37 – John MacFarlane

Recorded 2023-08-09. Published 2023-11-14.

Joachim Breitner and David Christiansen interview John MacFarlane, a professor of philosophy at UC Berkeley and the author of the popular pandoc document conversion tool, which has been around half as long as Haskell itself. John also explains the principle of uniformity as a design goal for lightweight markup languages and the relationship between philosophy and programming, and along the way he helps David with his markdown difficulties.


This transcript may contain mistakes. Did you find any? Feel free to fix them!

Joachim Breitner (0:00:18): Today, our guest is John MacFarlane, a professor of philosophy at UC Berkeley, but also the author of the popular Pandoc document conversion tool, which has been around half as long as Haskell itself. He also explains the principle of uniformity as a design goal for lightweight markup languages, the relationship between philosophy and programming, and along the way, he helps my co-host, David Christiansen, with his markdown difficulties. 

Welcome, John, to this podcast episode recording. What made us reach out to you was certainly less what you’re famous for and what got you your Wikipedia page, which is you being a professor of philosophy, but actually a tool you wrote called Pandoc, which may or may not be, but plausibly is, the most widely used Haskell program out there. So, let’s maybe start with that before we get into the deep philosophical topics, and tell us, how did you actually get to do Haskell?

John MacFarlane (0:01:17): Well, I will start very far back because my involvement with computers is really as a hobbyist. I actually have had a couple of small computing jobs, but it was many, many years ago. So, I’ve been a computer hobbyist since the 1970s, I think, and the first computer that I started programming was actually a KIM-1 microcomputer: a 6502 chip, 1K of memory. You programmed it by directly inserting bytes into the memory – was it 1K of memory that it had? I think so – and then pressing run. And actually, I still have that computer, and it still works perfectly. 

So, I was kind of a computer hobbyist through high school. And when I started college, I worked at Los Alamos Lab as a computer programmer during the summers. That’s where I’m from. So, I had a little bit of experience programming from that. But then, when I started graduate school, I pretty much fell out of the computer hobbyist thing, which is probably a good thing for my graduate studies. And I didn’t have anything to do with it. I didn’t really like Windows and all that stuff. 

So, I didn’t do anything with computers for a long time. And then for some reason, which I can’t recall at the moment, after I’d gotten my PhD and gotten my job as an assistant professor at Berkeley, I started getting interested again in Linux and computer programming and so on. And I learned about some of the new languages, like Python. And then I learned about Caml Light, and that brought me to Objective Caml. 

And then around about 2004, I think, I read on a blog by a philosophical logician friend, Greg Restall, who was a professor in Australia at the time. He posted something about this book he had found called The Haskell Road to Logic, Maths, and something or other. And he talked about how, if he had known about Haskell back in the day, he probably wouldn’t have become a philosopher. He would’ve been so excited about computing. So, I said, “Well, this sounds interesting.” So, I looked into Haskell, and I think at the time, they didn’t have very many resources on it. I think the thing that I learned it from was a PDF called A Gentle Introduction To Haskell.

David Thrane Christiansen (0:03:49): I learned it from that too. It wasn’t very gentle. 

JM (0:03:52): It wasn’t bad, though. Anyway, I learned a little bit of Haskell, and then I decided to try to just fool around with it and write some stuff. And this is the funny thing, is that I started writing Pandoc right then. That was the beginning of Pandoc. It was like practically my first Haskell program because at that time, markdown had recently come out, and I had been using reStructuredText, which is a similar kind of light markup language, for my lecture notes and things like that. But I’d been a bit unhappy with some of the tooling around it. And I thought, well, let me just see if I can write a parser for markdown because, after all, Haskell’s supposed to be good for that, and Parsec looks so nice. And then I’ll have a bit of control over my own tools.

So, I just started doing it just as a way to mess around with Haskell. And I wrote a markdown parser, and then I wrote an HTML formatter for it. And then I thought, well, I’ve got all this stuff in reStructuredText, so I better write a reStructuredText parser. But I better write a reStructuredText writer too because what if I switch all my stuff to markdown, but then I decide I don’t like it? I need to be able to go back again. And then I need a LaTeX writer because I need to format stuff.

So, before long, I had basically that much material, and that’s kind of the core of Pandoc right there. And it was all very crude. I didn’t really know what I was doing much at all. I barely knew Haskell, but Haskell kind of forces you to get it right. The compiler won’t make you make too many mistakes. 

So, I had this thing, and then I kept working on it and using it in my own work. And then, at some point, I don’t remember when exactly, but I think it was 2006—I decided to publish Pandoc on my website. I just put up a webpage that had a brief description of it and a tarball. I still have that. And it’s funny because cabal-install didn’t exist at the time, and Hackage didn’t exist. And so, the instructions for building Pandoc are, well, you just run ghc --make. And I guess I didn’t use any libraries that didn’t come with the compiler. 

So, that was the state of things. Okay. So, I stuck this thing on my website, and I didn’t really expect anyone to even really discover it there. But somebody did – I’m not sure how. Again, in 2006, a trainee Debian developer at the time, Recai Oktaş, contacted me and said that he wanted to package Pandoc, I guess, as part of his training to become a certified Debian developer. And I said, “Great.” And so we started doing that, and I learned a lot more then about packaging things professionally and so on. And it became a Debian package. And of course, that really dramatically raised its visibility, and it started getting used more.

JB (0:06:52): So, you’re saying Pandoc got a package in Debian before it was really popular, and that was what gave it the popularity boost back then?

JM (0:07:00): Yeah, that’s my impression. I don’t think that very many people knew about it or were using it. And maybe I’m wrong. It was a long time ago, so I can’t remember exactly. But I don’t know how they would’ve found out about it. It was on my website, and that’s it. There was no Hackage. There was no central repository. There’s no way to find it other than that. So, it is kind of unusual, actually. You’d think it wouldn’t really meet the bar for inclusion. And maybe I’m forgetting something about its use, but that’s my memory.

JB (0:07:32): It’s pretty impressive that the first Haskell program you write becomes this very successful and popular tool. So, when you initially wrote it, you didn’t already have in mind that it’s going to be this tool that takes n different input languages to n different output languages, but rather it started with just being a markdown renderer. Did you have to revise the internals a lot to turn it into what it is now, or did it just come out like that?

JM (0:08:02): Well, again, I don’t remember exactly because it was a while ago, but I know that I did start out with an AST in Haskell. So, I started out with the idea that we’d be parsing it to some sort of language-independent data structure. So, I think I did have the idea from the beginning that it could be rendered into different formats. And I certainly had restructured text as a model, and I also had a practical need to render things in both HTML and in PDF or LaTeX. So, I think that was my idea from the start. I certainly didn’t envision quite the scope that it’s gotten to now. 

DTC (0:08:39): Yeah, you typically never do.

JM (0:08:41): No. 

DTC (0:08:43): So, an interesting thing about Pandoc is that people typically want to use it based purely on its merits. It’s not sort of a “I’m already a committed Haskeller. I’ve got to find the Haskell way to do this thing.” It’s more “Pandoc is the best program for doing this thing that I want to do, thus I must learn Haskell in order to do it.” And this makes me wonder, do potential contributors to the project ever express difficulties with learning Haskell, or do you think that that’s been a source of joy or a source of pain, or both, or neither?

JM (0:09:17): I think this is one of the issues that people typically talk about when they’re talking about the drawbacks of starting a project in Haskell. “Oh, you won’t be able to get as many contributors because they don’t know Haskell.” It is true that there aren’t as many people who are equipped to contribute. For me, that’s been fine because I already get more pull requests and things than I can really process effectively. So, I’m kind of glad that there’s not more people who are in a position to contribute. So, I don’t think it’s been a problem in practice, although I don’t really have anything to compare it to. I know that one person claims to have written a complete module for Pandoc without knowing any Haskell before, just essentially taking an existing writer and using it as a pattern. And then, I guess, the compiler tells you when you’re getting it wrong.

So, I think Pandoc is, in general, written in fairly simple Haskell. I don’t use too many complicated things in there. And that’s partly due to the fact that I wasn’t a very sophisticated Haskeller when I started, and I’m still not that much more sophisticated now. So, I think it’s possible for people who are generally familiar with computer languages to look at some Pandoc code sometimes, even if they don’t know Haskell, and figure out what might be needed. But yeah, it probably does cut down the number of people who can contribute. I haven’t really gotten complaints about it.

DTC (0:10:40): Maybe it doesn’t cut down on contributions. I mean, I can imagine a world where people want to learn Haskell so they can work on it. So, they just do, and it’s fine.

JM (0:10:49): Right. I think many of the users of Pandoc have no idea that it’s written in Haskell. And when you say Haskell, they think, “Oh, Pascal, I studied that in high school.”

DTC (0:10:58): Yeah, yeah. Speaking of reaching users that don’t use Haskell and don’t want to cabal-install their tools, how was your experience with distributing a Haskell program for different platforms? What is your approach to distributing on Linux, where, I guess, you can do static linking or something else? How did that work out for you?

JM (0:11:20): Well, what I do now is I produce binaries for all the major platforms. And the one that’s for Linux is statically linked using Alpine. So, it’s just really, really portable. And I produce them for both ARM and x86 architectures. And so, all of those are on the Pandoc website and releases. And so, every time I do a release, I run a job that generates all of these binaries. And that seems to be the best way to do it because then I don’t have to rely on – as you may know, Debian is pretty far behind in packaging Pandoc because, in order to do that, they have to package all of the dependencies, which are extensive. So, I don’t have to rely on that. I can just say, “Go to the website and install that, and you’re fine.” And Haskell has been great for producing binaries for all the major platforms. That’s been very straightforward, I think. I mean, it’d be nice if we could cross-compile easily. That would be better rather than having to spin up a machine with each different OS and build separately. But I take it that that’s coming eventually. 

DTC (0:12:30): There’s a lot of renewed interest in cross-compilation because of the desire to have better support for running Haskell in browsers. And there, typically, it’s always cross-compilation that you’re doing. So, that’s adding some extra impetus to fixing the issues with that right now. So, I also expect to see it get easier pretty soon. 

JB (0:12:48): Yeah. I actually published a project in a similar way, and in that case, at least the Windows build is cross-compiled on Linux; the macOS build, unfortunately, is not. So, it’s not a complete from-Linux solution. And it’s using Nix, but maybe let’s not go into that in every topic, every time on this podcast. So, that means, basically, from the side of your end users, they don’t know this about Haskell, they don’t complain about it – it sounds like there’s no downside to using Haskell when you’re just providing a command-line tool cross-platform?

JM (0:13:21): I really don’t think there is.

DTC (0:13:24): Great.

JB (0:13:26): Okay. That was a fun one.

DTC (0:13:28): What do you see as the big upsides of using Haskell over the years? I mean, if you could roll back time 16 years, would you pick Haskell again for Pandoc, knowing now what it could become?

JM (0:13:39): Well, considering that a major motivation for doing it was simply wanting to fool around with Haskell, then yes, I would. And I think it’s worked out great. I think that nowadays, Rust is available. I haven’t really played with it that much, but it looks quite good. And maybe, from an engineering point of view, something like that would make sense for a project like this too. But Haskell has been far superior to most of the alternatives that I’ve been aware of. And the main thing is just the types make it easy to refactor the project and know that you aren’t missing anything, right? So, if I want to change something, it’s so easy in Haskell. You just go in and you change something in one module, and then you just keep running the compiler and changing things till it stops telling you what you have to change. Now, compare that to doing a JavaScript project or something. You change something, and it’s like, you actually have to figure out all the possible effects and what else might depend on it and run zillions and zillions of tests. 
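The compiler-driven refactoring loop John describes can be sketched with a toy example. The type and function below are illustrative stand-ins, not Pandoc’s actual AST (which lives in the pandoc-types package):

```haskell
-- A toy block-element type, loosely in the spirit of a document AST.
data Block
  = Para String        -- a paragraph of text
  | CodeBlock String   -- a literal code block
  | Header Int String  -- a header with a level
  deriving (Show, Eq)

-- A tiny HTML writer with an exhaustive case analysis over Block.
renderHtml :: Block -> String
renderHtml (Para s)      = "<p>" ++ s ++ "</p>"
renderHtml (CodeBlock s) = "<pre>" ++ s ++ "</pre>"
renderHtml (Header n s)  = "<h" ++ show n ++ ">" ++ s ++ "</h" ++ show n ++ ">"

-- If a new constructor (say, BlockQuote [Block]) were added to Block,
-- GHC with -Wall would flag renderHtml (and every other case analysis
-- over Block) as non-exhaustive, listing exactly the places to update.

main :: IO ()
main = putStrLn (renderHtml (Header 1 "Hello"))
-- prints: <h1>Hello</h1>
```

Running the compiler after each change and following the warnings until they stop is exactly the “keep changing things till it stops telling you” workflow described above.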

So, I think as a solo developer – well, I’m not really a solo developer. I should mention, I’ve had a lot of very good collaborators with Pandoc through the years. I mean, I’ve probably done more myself than anyone else has, but it’s not a big project. There’s usually only a few people who are working actively on it at any one time. And with a project like that where we have limited time, it’s really been helpful to have the type system help us along, give us confidence when we’re changing the project that we’re not going to blow it up. So, I’d say that is the number one thing that I found compelling about using Haskell. 

Another thing that I like about it is just that it’s fun. It’s a fun language, ergonomic, you can make it look elegant. Not that Pandoc code is all that elegant, but you can make it look as elegant as you want. You can do lots of abstract things, and that appeals to me. It’s sort of intellectually more fun than a lot of other languages. So, that keeps my interest up, I think, more than certain other things. 

JB (0:15:47): But despite this praise of typed languages, the extension language for Pandoc is Lua, which is as untyped as JavaScript, I guess.

JM (0:15:56): Yeah. Well, I’ll tell you the reason for that. So, originally, one of the ways to extend Pandoc is this idea of filters. And the way we originally did that was what we called JSON filters. So, we would have Pandoc produce a JSON-serialized AST, spit it out to standard output, and then you’d have a filter that would read that from standard input, modify it, write it out again, and then Pandoc would read it in again. So, you just transform the AST. You could use any language you want, but it was particularly convenient to use Haskell. So, the original concept was you write a filter in Haskell, which can easily manipulate the AST. So, the filter is really simple code. There’s a little glue code, but basically, your filter just says, “Take an emphasis node and change it to a bold,” or something like that, and it’ll just do it throughout the document. So, that was great. Only problem is, well, these filters either have to be compiled or the person has to have the right Haskell interpreter on their machine, and—this is the real problem—all the libraries. And in the early days of Haskell, people would cabal-install the libraries globally. Then, it wasn’t too hard. You say, “Okay, install these libraries and then run this filter with GHC, it’ll be fine.” But then once Cabal changed to sort of sandbox everything, you couldn’t do that anymore. 
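The “take an emphasis node and change it to a bold” transformation can be sketched like this. The types below are toy stand-ins; a real JSON filter would use Pandoc’s actual AST and the `toJSONFilter` helper from the pandoc-types package, which handles the JSON serialization for you:

```haskell
-- Toy stand-ins for Pandoc's inline AST (illustrative only).
data Inline
  = Str String
  | Emph [Inline]
  | Strong [Inline]
  deriving (Show, Eq)

-- Rewrite every emphasis node to bold, walking the whole tree,
-- the way a filter applies its transformation across the document.
emphToStrong :: Inline -> Inline
emphToStrong (Str s)     = Str s
emphToStrong (Emph xs)   = Strong (map emphToStrong xs)
emphToStrong (Strong xs) = Strong (map emphToStrong xs)

main :: IO ()
main = print (emphToStrong (Emph [Str "hello"]))
-- prints: Strong [Str "hello"]
```

In the JSON-filter pipeline described above, a function like this sits between the deserialization and reserialization steps, so the filter author only writes the transformation itself.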

And so, it really became difficult to use filters. They had to have a lot of Haskell stuff installed, and they had to have it installed properly, and they had to have libraries set up right. So then I thought, well, what if we just build a Lua interpreter into it? Then they don’t need any external interpreter at all. And there are other advantages to that as well. So, one advantage is you skip all the serialization and deserialization, which adds a lot of time to what it takes to use a filter. So, with the Lua filters, the Lua interpreter is directly hooked into the Pandoc code, so it can just go in and mess around with the types and things directly. There’s very little overhead to running a Lua filter, and you need no external software. So, it’s huge gains, I think, in usability. It’s true you don’t have the types, and it can make it a little more difficult to debug, but I think overall it’s been a big win. And there are lots of projects now that are using just tons of Lua filters on Pandoc. So, I think R Markdown, which is used for the R statistics language, is one. It’s just a bunch of, as I understand it, Lua filters. I hope that’s – that might be inaccurate, actually. So, maybe I’m getting confused between R Markdown and another project. But there’s a philosophy journal now that runs with Pandoc and a whole mess of Lua filters. So, it’s turned out to be a pretty practical decision.

JB (0:18:38): Are you now running into problems where previously you could change the AST easily, because if you have the types, refactoring is great in Haskell and you just change everything and it works again, but now you have to worry about all these untyped Lua filters out there that you maybe don’t want to break anymore? 

JM (0:18:55): Yeah, that is a problem, actually. The way that the Lua filters are interacting with the AST is through functions that we provide. Okay? So, there’s a Lua API that we provide. So, we can change that as we change the AST. And so, lots of changes, it’s fine. But there are certain changes. Yeah, if you add a new AST element or something, if you were to radically change one of the existing elements, it could break existing filters. And I mean, honestly, that’s the problem for filters in Haskell too. They would still need to be rewritten. We’ve dealt with that by being pretty conservative about changing the AST. The AST changes as rarely as possible, and changes there are pretty disruptive. So, yeah, I don’t know a better solution to that problem, but that is something that we’ve dealt with. 

DTC (0:19:44): Yeah. So, 16 years of Haskell, that’s I think almost half the time that Haskell’s been around at all. What’s gotten better and what’s gotten worse?

JM (0:19:53): Well, certainly, Hackage and cabal-install and things like that, that’s been great. It has gotten better. That’s been a huge improvement, although it’s also allowed me to make Pandoc depend on a huge kitchen sink of other things, which it didn’t before. But that’s been very good. It’s worked quite reliably for me, I think. The addition of the text library has been good. Certainly, that was needed. That’s a long time ago now. 

What else has been good? I don’t really use too many of the fancy IDE features. I mean, I guess I use the Haskell language server, but I don’t really care about it that much. I mean, if it doesn’t start up on my editor, I don’t really miss it that much. Usually, I don’t use that extensively. So, that’s not really an issue. But the package ecosystem has been good, and I think it works quite reliably now. There was a period where we had cabal hell, but I think we’re well beyond that now. 

DTC (0:20:53): Oh yeah, years and years. I don’t know if well beyond hell sounds good or bad. I guess it depends on the direction and whether it’s circular, the hell, or not.

DTC (0:21:06): I’ve felt like I’ve been in, not necessarily cabal heaven lately, but at least cabal purgatory.

JB (0:21:12): Yeah. We’re probably walking back again from the seven layers of cabal hellishness. 

JM (0:21:18): Exactly.

DTC (0:21:19): I think sandboxes were purgatory. And now that we have the Nix-style builds, I think we’re well into limbo territory, but –

JM (0:21:26): That’s true. Yes. As long as you have enough disc space.

DTC (0:21:30): Yes. So, interestingly, I didn’t hear you mention anything about Haskell, the language, in those 16 years of improvements that you’ve mentioned. Is this because of your conservative approach to language features where the new things aren’t that relevant? Or is there some other thing going on there?

JM (0:21:46): No, it is partly that. So, I think recently I’ve started cutting off support for earlier GHC versions, but I used to try to support GHC back to – well, I don’t know, like at least 8.4. And so, I just couldn’t use new language features. And so, I usually didn’t even really bother to learn about them. I can’t remember what the oldest version we’re supporting is. I’m going at least three or four versions back, but that would allow me to use a lot more features if I wanted to. It’s just that I’m not used to them. So, I should probably look into them, but I think –

DTC (0:22:22): Or not. If you’re happy with what you’ve got right now, then you know.

JM (0:22:25): I try to keep it simple, usually.

JB (0:22:27): I guess there’s a flip side to new features in Haskell, and people sometimes complain about Haskell upgrades or new compiler versions breaking lots of code. How was that for you? Do you find new versions of GHC give you headaches and sleepless nights before you look at them, or is it smooth sailing?

JM (0:22:46): Yeah, it’s usually not smooth sailing at all. I mean, for one thing, you have to wait for all the dependent libraries to be updated for the new base version and so on. And that usually takes quite a while. And then sometimes – yeah, I usually try to compile with -Wall, and there’s always little warnings that come up. And then you think, well, this version will give me one warning, the other one won’t. And so, it can be a hassle. It can be a bit of a hassle. But I mean, I think it’s just sort of inevitable. You’ve got to move the language along.

JB (0:23:16): I mean, that is a question that we keep asking ourselves. Is it inevitable, or should we try harder to not give you that hassle? Or is it worth having that hassle for all the good things you may get out of it? And of course, you’re the prime audience for having a Haskell program in production running for 16 years. So, obviously, what you think matters a lot.

JM (0:23:39): Yeah. Well, I don’t know. I mean, I think Haskell – obviously, Haskell is used for a lot of things. And one of the things that Haskell has always been for is a kind of test bed for interesting ideas about functional programming. I wouldn’t want that to be given up just to make it easier for me to keep this long-running program going.

DTC (0:23:56): But if we can find ways to continue innovating while not disturbing others, that’s even better, right?

JM (0:24:02): Yes. Even better, yes. Even better for that. Yeah. 

DTC (0:24:05): Yeah. So, you’ve also been involved in the CommonMark standardization effort. For those who aren’t aware of it, what is CommonMark?

JM (0:24:11): So, CommonMark is an attempt to give a more rigorous and precise specification of the details of markdown syntax, essentially, and it confines itself to the core features of that syntax without worrying too much about extensions and things at this point. It’s a project that – I mean, it’s not really finished. What we have has been pretty stable for quite a long time, and I think it’s had an effect. I think most of the current markdown implementations are trying to conform to it, and I think that’s good. We have a lot less variation about what counts as a sub-list and things than we used to. But I don’t consider it complete. There are conceptual difficulties that I’ve never been able to really solve.

DTC (0:24:53): Interesting. What are some of those conceptual difficulties, if you don’t mind getting a little crunchy on the podcast?

JM (0:24:58): So, I’ve gone into some of them in this essay called Beyond Markdown, which you can find on my website. So, one of the difficulties is I wanted to have something I called the principle of uniformity. And that means that if a block of text means something, then it should still mean the same thing when it’s put into a context like a list item or a block quote or something like that; putting it in that different context shouldn’t change its meaning. So, now consider a text which has a line of text, and then on the next line has, say, 34, period, space, and then some more text. Okay? Now, what John Gruber did in the original markdown is, I mean, he saw that sometimes you’d have hard-wrapped lines like that, and in the original markdown, they would have created list items. And so, he excluded that case, and he actually had a test case for things like that so that those wouldn’t create a list item. So, you could have a hard-wrapped line with a number at the beginning. 

So, the basic principle we have so far is, when it’s by itself – you have one line and then another line starting with a number, no blank line in between – that’s not a paragraph followed by a list item, right? Now, take that whole thing and put it in a list. So, indent it to the right, and now put a marker like “1.” in front of it. Well, now most markdown parsers will interpret that as a list item, which has a paragraph and then a sub-list. So, that same thing, which outside of the list was just one paragraph, now, inside the list, has a different meaning. It’s a paragraph followed by a list. I didn’t like that. That violates the principle of uniformity. 
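As a concrete illustration of the case John describes (exact results vary across markdown implementations), the hard-wrapped text on its own reads as one paragraph:

```
It was the best result since
34. That was a good year.
```

But indent that same text and put a list marker in front of it, and, per the behavior described above, many parsers see a list item containing a paragraph followed by a numbered sub-list starting at 34:

```
1.  It was the best result since
    34. That was a good year.
```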

Now, I think a lot of people don’t care about that type of thing, but that was a kind of guiding principle in designing CommonMark. And I couldn’t figure out a good way to really maintain that principle. Right now, we have a kind of weird compromise where we’ll allow a list to start if the list marker at the beginning is 1, but not if it’s anything else, which is a bit ugly. 

JB (0:27:11): Is that better or is it worse? 

JM (0:27:13): I don’t know. I mean, you’re going to avoid capturing dates at the end of sentences and things like that, at least. But what if you want to start a list with a number other than 1? It’s just a bit of a kludge. So, that was one of the conceptual difficulties that I talked about in this document. 

There are various edges like that that I could never really square to my satisfaction. And actually, recently I’ve played around with creating a new markdown syntax, which I call djot, D-J-O-T, that is built on the principles that I articulate in this document. So, in that syntax, you have to leave a blank line to start a list. Lots of people just absolutely hate that. And what I’ve found is that there’s a big – I think it might be almost a generational conflict—I’m not sure—between the hard wrappers and the non-hard-wrappers. I mean, people say, “That’s just insane. You should never put a carriage return inside a paragraph. You just have it all be on one line and let your editor wrap it.” But I’m of the old school who thinks that when you write a text document, it should look good and be readable without any special formatting from an editor, so hard wrapping is okay, and we need to make room for it. If you make room for it, you have to deal with the fact that you might have numbers at the beginning of a line, and so this problem arises. And so, for me, it’s an acceptable cost to do what reStructuredText already did and require a blank line before a list. 

So, that’s an example of the kind of thing that I never could quite come to a decision about in thinking about CommonMark. But that said, I think that the syntax we have for CommonMark is quite stable. It changes periodically, but it’s been workable and continues to be refined gradually.

DTC (0:29:04): So, I recently wrote a quite long document using markdown, and I found that the lack of an established syntax for extensions was quite frustrating. I know Pandoc has a lot of extensions on top of markdown, but they’re all, as far as I can tell – I mean, I want to say ad hoc without being pejorative – in the sense that with reStructuredText, you can say, “I’m going to make a new directive, and this is the syntax for fresh directives that are not assigned any semantics by the specification of reStructuredText.” And then your processing tools can say, “Oh, I know what to do with the thing that’s called math, or the one that’s called admonition, or something like that.” Whereas for markdown, when I need an extension, I always find myself writing some kind of regular-expression-based workflow and being a little bit worried that I’m doing it wrong. As an influential markdown user and developer and author of many extensions who previously used reStructuredText, what are your thoughts on this? Is there a better way?

JM (0:30:04): Right. So, I think what we need, and what Pandoc already has, actually, is a generic container for literal block text, a generic container for literal inline text, a generic container for formatted block text, and a generic container for formatted inline text. When I say “generic container,” I just mean you can put whatever you want in there, and you can attach arbitrary attributes to it. So, Pandoc has that. You’ve got code blocks, you’ve got inline code, you can attach attributes to both, and you’ve got an attribute syntax. And you’ve got what we call fenced divs, which are block containers – you can attach attributes to those – and we’ve got bracketed spans, which can contain inline text. So, if you’ve got those things, you’ve got a well-defined extension mechanism, because now you just have your filter hook into it: you take your filter and you say, “Look for a span with the class foo, and then do this with it.” So, that’s something that I think would be desirable to have generally in markdown. And this djot language that I described has that as well. 
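The Pandoc syntax John refers to looks roughly like this (fenced divs and bracketed spans are Pandoc markdown extensions; the class names `warning` and `foo` are made up for illustration):

```
::: {.warning}
A fenced div: a generic block container
that a filter can target by its class.
:::

And [this phrase]{.foo} is a bracketed span,
a generic inline container with attributes.
```

A filter can then look for a Div or Span with a given class and assign it whatever semantics the document needs, which is the well-defined extension mechanism described above.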

I think when we were doing CommonMark – well, I mean, the history of it is, originally, there was this sort of team of people from various tech companies, and then me, who were working on it. And there was a lot of discussion over the period of a year or so about lots of details. And I think in these discussions, I had tried to push for something a little more like what Pandoc has, with a well-defined, structured syntax for attributes, but I didn’t really prevail on that. So, we ended up with something much less well-defined and without some of those containers that we really need. Another thing is that we were really aiming to just sort of codify the existing core markdown syntax, and so we had intentionally left extensions to another time. So, that all is sensible. I think it would be a reasonable extension of CommonMark, and probably something that’s needed, to add some constructions like that. I agree completely; that’s what’s needed. 

DTC (0:32:19): And then, if it’s okay, may I bother you about one more markdown detail that has bothered me when writing a long technical document?

JM (0:32:26): Oh, definitely. 

DTC (0:32:27): And markdown has a very clear conceptual split between the block elements and the inline elements, and yet I often want to have a single paragraph that contains blocks. So, for example, I may want to have a paragraph of text which then has an equation, or a multi-line code snippet, or a bulleted list, and then continues as conceptually the same paragraph. So, if I were to render it with LaTeX, the continuation lines would not be indented, for instance. And I haven’t found a way to indicate that either. Do you think that’s ever in the cards, or do you have a good workaround?

JM (0:33:00): I agree with you completely, and I’ve run into this problem many times. I think recently I’ve been using a small Lua filter to deal with this. It’s a bit of a hack. But what I do is I start a paragraph. If it’s a continuation paragraph, I start with an underscore and then a space. And then I have a Lua filter look for that pattern.

DTC (0:33:18): That seems like a perfectly reasonable way to do it.

JM (0:33:20): And it adds a LaTeX \noindent. So, that is a bit of a hack, but it works. But there should be something built in. I think you’re right. Conceptually, often block quotes are part of a paragraph rather than a separate block. So, maybe it’s a bad decision to think of a block quote as a block element at all. Maybe it should have been an inline element all along. I mean, display math in Pandoc is an inline element, actually. So, with display math, you don’t run into the problem. But maybe there are other things besides block quotes where this comes up. So, I don’t know. I’ve been bothered by this for a long time, and I feel that there should be some way of dealing with it, but I’m not quite sure what it is.
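(An editorial aside: John’s trick is a Lua filter, but the same idea expressed as a Pandoc JSON filter in Haskell, using the pandoc-types package, might look roughly like this – the underscore-marker convention is his; everything else is a sketch:)

```haskell
{-# LANGUAGE OverloadedStrings #-}
-- A sketch of the continuation-paragraph hack as a Pandoc JSON
-- filter (John uses a Lua filter; this is the same idea in Haskell,
-- assuming the pandoc-types package).
import Text.Pandoc.JSON

-- A paragraph beginning with "_ " is a continuation paragraph:
-- drop the marker and prepend a raw LaTeX \noindent instead.
noIndent :: Block -> Block
noIndent (Para (Str "_" : Space : rest)) =
  Para (RawInline (Format "latex") "\\noindent " : rest)
noIndent block = block

main :: IO ()
main = toJSONFilter noIndent
```

Compiled to an executable, it would be run as `pandoc --filter ./noindent input.md -o output.pdf`.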

JB (0:34:00): What I find interesting in this discussion, especially about extension points, is that it feels very similar to the kind of discussions we have when we try to evolve Haskell, where we have a new feature that we didn’t think about 10 years ago, and it’s kind of obviously wanted. And now we’re suddenly stuck in this very narrow syntactic space. And people spend time thinking, can we maybe reuse some keyword that’s already a keyword in a place where it wasn’t appearing before? And suddenly, we have these things that are only keywords in some places and not in other places. And I don’t know if there’s anything to discuss here, but it probably takes some amount of foresight to design a language, whether it’s a programming language or a markup language, that is able to support all the features you want but can’t think of at the moment that you’re designing the language.

JM (0:34:51): Yeah, it’s always a huge problem. And you can’t change things without breaking many existing documents. So, you’re kind of stuck with what you figured out at the beginning, which wasn’t good enough because you hadn’t run into these cases.

DTC (0:35:04): And in defense of markdown, it started off as an alternative way to write HTML, and HTML inherits from SGML the idea of block and inline things, where a paragraph is a block and it may not contain other blocks. So, markdown comes by it honestly.

JM (0:35:20): Right. And I think one reason for that is that HTML documents are rarely presented with the paragraphs indented in the first line, like they are in a book. So, if the paragraphs are all flush left, then it really doesn’t matter if it’s a new paragraph or a continuation. The problem I always have is that I send something to a journal after sending it through Pandoc, and then the proofs come back and it’s like they are indenting the paragraphs and you have to tell them, “Oh no, this is a continuation paragraph.”

DTC (0:35:49): Yeah. Turning to the matter of journals, you are a working philosopher; you’re not primarily a Haskell developer, notwithstanding the fact that you seem to be quite good at it. And as far as I understand, you mostly work in philosophy of language. Is that correctly understood?

JM (0:36:03): Well, I think of myself as a kind of a jack of all trades because I’ve worked in a number of different areas, but that is probably the predominant one. Yes.

DTC (0:36:10): So, what is the philosophy of language? For programmers out there who are listening to this who didn’t have that as part of their education or things they’re interested in, what sorts of problems is the philosophy of language interested in and what sorts of tools is it using to solve them?

JM (0:36:25): Well, there are many issues, of course, but I think some of the things that I’ve been particularly interested in are how to understand discourse that you might regard as subjective in various ways. And so, that includes, like, discourse of taste – “it’s tasty” – but also things like discourse using epistemic modals, when you say, “Joe might be at work today.” That intuitively expresses your lack of knowledge about whether he is. So, how do we understand that kind of discourse? And there are various models for understanding it, which all have different problems. And so, that’s one of the things I’ve been concerned to try to get clear about.

Vague discourse is another thing that I’ve been working on recently. How do we communicate using vague language? How does that work? How do we get things across, and what is it that we’re getting across? 

Generally, when we’re working on these problems, we’re working within a certain kind of framework. Typically, I’m working in a framework which tries to construct formal models of meaning, not unlike what you might get in studying denotational semantics for programming languages or something like that. In fact, all this stuff has a kind of common root. Like Gottlob Frege came up with some of these ideas, and they’ve become common currency in formal semantics and linguistics and philosophy and also the theory of programming languages. So, that’s kind of the framework within which I’ve been working on this stuff. 

DTC (0:37:56): What do you think this framework has to say about programming languages? The reason I ask is because programming languages, when we consider them from the perspective of programming language theory, are sort of formal artifacts where we either want to understand how to run them or what they mean, in the sense of denotational semantics. But a lot of the time, when I’m using a programming language, I’m not using it because I want it to run. I may do this as part of writing the program to make sure that I’m not making silly mistakes, but what I actually want to do is succinctly and accurately communicate an idea to somebody and say, “This is an algorithm,” or “This is a way to solve a problem,” or “This is a way to formulate a problem.” And I’ll often do that in Haskell or in Racket. Those are my kind of go-to languages for this sort of thing, depending on how many continuations I need to describe the problem. And usually, that’s not very many, which means more Haskell these days. So, what do you think the philosophy of language has to say about this kind of communicative act?

JM (0:39:00): So, about what? Using a computer program to get across ideas?

DTC (0:39:04): Or formal artifacts more generally, I guess, right?

JM (0:39:08): Right. In a way, this relates to some of the foundational work that the pioneers of modern logic were doing, Russell and Frege, when they were trying to formalize mathematics. And it’s funny, when you teach logic, say at the university, people think that logic is about proving stuff. “If I learn logic, I’ll be able to prove stuff.” And it’s really not. I mean, most of the stuff that you can prove with logic, a lot of it is kind of obvious anyway. You didn’t need to prove it with logic. What logic is good for is making your ideas clear, as you say. It’s for helping you to articulate the concepts that you need. And that is, in fact, what Frege argued it was good for in some of his early papers: forming concepts.

So, I think what you say is right. And I actually use Haskell that way too sometimes. If I’m thinking about some kind of complicated thing and I want to understand it, I sometimes will just open up a Haskell file and try to represent it with Haskell types, which are very flexible. And somehow, that helps me. And sometimes if I’m trying to follow something – an article, say – that’s developing a fairly complicated theory, I will just try to code it in Haskell, because then I’m sure I understand everything. It’s a good way to do that.
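(A tiny editorial illustration of what that can look like – a toy possible-worlds model of the epistemic “might” John mentioned earlier; all names here are invented for the sketch:)

```haskell
-- A toy formal model pinned down with Haskell types: a miniature
-- possible-worlds semantics for epistemic "might".
type World = Int
type Prop  = World -> Bool

-- "might p" is true at w if p holds at some world the agent
-- cannot rule out from w.
might :: (World -> [World]) -> Prop -> Prop
might epistemic p w = any p (epistemic w)

-- From world 0 the agent cannot rule out worlds 0, 1, and 2, so
-- "Joe might be at work" (true only in world 2) comes out True.
example :: Bool
example = might (const [0, 1, 2]) (== 2) 0
```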

DTC (0:40:39): So, you’re talking here about all these ideas coming back to this early 20th century or, I guess, late 19th century sort of common root, right, where you’ve got, like, Frege, who then inspires the whole foundations of mathematics crew. And then we kind of branch off, right? We get mathematicians who say, “Okay, well, that’s solved. We’re all formalists now, and we’re all going to work in set theory. And we’re not going to think about foundations ever again because it’s been solved. We don’t have to. And now we can just go back to doing math, and we’re okay.” You’ve got the philosophers who have their own take on logic. And I know in a lot of US universities, the logic class is taught in the philosophy department rather than the mathematics department. And then you’ve got the computer scientists and the programming language people who have our own particular take on these things. What do you think is different about the ways that logic is approached by philosophers, mathematicians, and computer scientists?

JM (0:41:32): Right. Yeah. I don’t know as much about the computer science side, actually, but I mean, nowadays, mathematicians are using logic to solve mathematical problems, essentially. As you say, many of them are not as interested in the kind of conceptual foundational issues anymore, whether rightly or wrongly. I mean, I don’t think that these issues are actually solved. We’ve made progress on them. But anyway, some mathematicians are interested in those things, but many don’t need to be. Philosophers use logic to attack philosophical problems. So, you might, for example, develop a logic for agency, or a temporal logic of agency, to try to get clear about some of the concepts that are involved in agency and obligation and things like that. So, it’s really just applying logic to different domains. I don’t know as much about the computer science side of things, although that’d be kind of a natural thing for me to be interested in.

DTC (0:42:31): I guess a related question is, a lot of the conceptual underpinnings of the tools we use in functional programming come out of philosophy, right? Like, Quine gave us referential transparency, which is dear to Haskellers, but also the notion of quasi-quotation, which is important to – you mentioned your background with Lisp; I also have a parenthetical background myself, and quasi-quotation comes up a lot in that context. I think the intensional/extensional distinction we get in type theory is basically like Frege’s sense and reference thing. Like, I don’t know what terms you like to use, where I know some people insist on using the words untranslated from German, but I feel like the morning star/evening star argument. But I haven’t seen a lot of this kind of hopping the fence for a couple of decades. And I’m wondering, what are we missing out on? You’ve mentioned a lot of cool modalities that might be interesting. I know modal type systems are becoming popular as a way to implement all sorts of really cool stuff. But are there other conceptual tools that philosophers have been building that we can steal and use to make our programs better?

JM (0:43:40): That’s a great question. I don’t know. I haven’t thought about that very much, but I should think about it. I don’t know if I can say anything.

DTC (0:43:47): That’s okay. It was a question.

JM (0:43:48): I mean, I think it’s an excellent question. You’re absolutely right. There should be things that could be borrowed and used that aren’t being. I have to put some more thought into that, I think.

DTC (0:44:01): I look forward to reading the paper.

JM (0:44:03): Okay. Yeah.

JB (0:44:05): So maybe going full circle, if we look back at the 16 years of Haskell programming that brought you here and Pandoc, what is your wishlist for Haskell for the next 16 years? What would you like to be different about Haskell – the ecosystem, the language, the people, whatever?

JM (0:44:22): What is my wishlist?

JB (0:44:24): I mean, I’m very happy to hear that everything’s great and perfect and shouldn’t be in any way different, but I’d still be interested to hear what you might want to have instead.

DTC (0:44:34): “It should never change” is also in itself a wishlist.

JB (0:44:37): Well, that’s true. Yeah.

JM (0:44:39): Well, I mean, there are advantages to that, of course, because then the old code still runs. I mean, I think, yeah, I’m not sure I have a lot to add to what people always say. I mean, the proliferation of string types is a nuisance. You’ve got to convert between a lazy ByteString and a strict Text and stuff like that. Why do we have to deal with all this? But there are reasons, of course. The record syntax could be nicer to use, but it’s not a huge problem. But that would be a very good thing to do better on, I think.

JB (0:45:13): Have you already played around with some of the latest overloaded record dot syntax things?

JM (0:45:18): No, I haven’t really played with them, for the reasons that you give. I think performance has been a big issue for me, and I’ve often been disappointed that my Haskell programs will run slower than a JavaScript program or something like that, which might even be using roughly the same algorithm. And then I try to figure out why. And I often don’t have much success. Sometimes I’m able to get somewhere on that. But for example, profiling things with Parsec is usually not that revealing, I find. I guess there’s probably just a lot of overhead in the internals of Parsec that maybe you can’t avoid. But it is disconcerting that you can sometimes add one strictness annotation somewhere, and it really, really speeds up your program. And then you’re like, “Well, what other places could I add one that would make a difference?” And it is hard to reason about that. I mean, this is something everyone says, but I think this is an issue. Haskell programs, typically the ones I write, take more memory to run than things I write in other languages. And that’s probably just me. That’s probably something that I could avoid if I really knew what I was doing. But on the other hand, I’ve been doing this for quite a while, and it should be easier to avoid making mistakes about memory. I don’t know how to fix that without radical changes in the language, of course. And we made some progress in –

JB (0:46:45): But it’s just this time. We don’t need to know the answer. It’s good to know what the problems are.

JM (0:46:49): Yeah, that’s definitely one. Yeah. The compiling is a bit slow. It’d be nice to speed that up. It’s getting less of an issue because computers are getting faster, I suppose, but Pandoc takes a long time to compile.

DTC (0:47:03): As somebody who compiles Pandoc on a regular basis, I agree that that would be excellent.

JM (0:47:08): Yes. There are some other weird little issues. For example, I have this Skylighting library, which is for syntax highlighting. And what it does is it creates Haskell modules for each syntax. And these are created by reading the KDE XML syntax definitions and then implementing them in a kind of state machine. And I ran into this problem, which may not exist anymore – I don’t know, because this was quite a few GHC versions ago. But what I used to do is write the modules out via code generation and then compile them. And it took forever to compile them, because these modules mostly consisted of a very long record. And GHC just really got incredibly slow in compiling that kind of thing.

So, what I ended up having to do was, instead of printing out the program itself, I used the binary library to convert it into a binary representation. And then I print out a program which says, “Decode this binary blob.” And now it compiles really fast. But that just seems like the kind of kludge that you shouldn’t have to do. I mean, there’s no reason why a language should run into extra problems when you have a really long record, for example, or a big module or things like that. So, those are things that seem like they should be easy targets to improve. And I think that, in fact, they have been improving those, and for all I know, this problem that I ran into no longer exists.
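(An editorial sketch of the workaround John describes, using the binary package; the function and type names here are invented:)

```haskell
-- Instead of generating a module containing one enormous record
-- literal (slow for GHC to compile), serialize the value with the
-- binary package at generation time.
import Data.Binary (Binary, encode)
import qualified Data.ByteString.Lazy as BL

-- Generation step: write the syntax definition out as bytes.
writeSyntaxBlob :: Binary a => FilePath -> a -> IO ()
writeSyntaxBlob path = BL.writeFile path . encode

-- The generated module then only needs something like
--   syntax :: Syntax
--   syntax = Data.Binary.decode blob
-- where 'blob' is the embedded byte string, which compiles quickly.
```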

DTC (0:48:34): But if the code works, why edit it?

JM (0:48:36): Exactly. Exactly. It works well now, so leave it that way. 

DTC (0:48:41): What does your Haskell development environment look like?

JM (0:48:43): Well, lately, I’ve usually been using Emacs with the Haskell Language Server. Occasionally, I use Vim. I kind of go back and forth between them. I’m sort of equally comfortable. Of course, I use Emacs with the Vim bindings, so it’s much the same. But that’s usually what I do. Yeah. So, nothing very fancy at all. I don’t really rely on the fancy features of the IDE at all either. I don’t even really know what the bindings are for most of them. Most of what I do is, there’s a bug and you have to figure out what the bug is. And usually, that’s just a matter of looking at the code and thinking about it. Occasionally, I’ll put in some traceShowId-type stuff, but I don’t do anything very fancy.

JB (0:49:21): Wait. So, you are saying that there are bugs in Haskell programs?

JM (0:49:26): Well, perhaps we should hush that up.

JB (0:49:29): Okay. Yeah. Maybe we shouldn’t go too much into that. After all, if it compiles, it works. We know that. 

JM (0:49:35): It works. If it compiles, it does something, anyway. It might not do what you want it to do.

JB (0:49:41): But definitely, it looks like your programs have done what people wanted because, well, it’s hugely successful. 

JM (0:49:49): Well, I mean, actually, they do that partly because I get a huge amount of feedback. People represent what they want on the bug tracker, and Pandoc changes, and so it grows to fit what people want very often. I mean, of course, my own strong opinions are part of that, but –

JB (0:50:06): Well, I guess in that case, I want to thank you, both for Pandoc – which I’m also obviously using all over the place anytime I need to convert something or create a small static webpage – and for coming onto this podcast.

DTC (0:50:20): Thanks so much.

JM (0:50:21): Oh, thank you very much for having me.

Narrator (0:50:26): The Haskell Interlude Podcast is a project of the Haskell Foundation, and it is made possible by the generous support of our sponsors, especially the Monad-level sponsors: GitHub, Input Output, Juspay, and Meta.
