Lisp is the ultimate power tool. The language can be extended by the programmer in almost any way they see fit, without having to wait for Lisp 2.0 or 3.0. It appears to be the ultimate road to infinite programmer expressivity. But with great power comes great responsibility: are programmers capable of dealing with so much power? I talked to Manuel Simoni to find out more.
To start off, can you tell us something about yourself?
I’m 30 years old, and have been programming for half of that time, starting with Pascal and the TI-92’s language, then C++, Java, Ruby, O’Caml, Haskell, and Common Lisp. I’ve worked as a programmer in a couple of Austrian software companies, and am currently a consultant. In my spare time I like to hack on programming languages and internet applications, but they’re not really past the toy stage yet.
As an aside: Is it LISP or Lisp, does it matter? Since Unix and UNIX are two different things.
Uppercase LISP stands for LISt Processing and was the original name of the language, in 1959. Back then, that meaning was more appropriate, because Lisp’s use of lists made it comfortable to write a Lisp evaluator in Lisp, which was a big discovery.
In an article on Paul Graham’s page, Doug McIlroy, one of the inventors of Unix, relates how mind-blowing John McCarthy’s discovery of Lisp was. It’s worth including in full, because it gives an interesting view of computer science at the time:
Recursion had no place in mainstream programming at the time, nor did lambda calculus. Only two years before, I had sat in a coffee-room discussion of what it would mean for a subroutine to call itself. Questions raised but unanswered were whether recursive instances deserved to be deemed the “same” subroutine, and, if you could do it, what good would it be? It turned out you could do it: I programmed it for the IBM 704. Given the challenge, the now standard stack solution arose inexorably. But the question of what it was good for remained.
In the course of the lecture John introduced the usual basic list functions like copy, append and reverse (quadratic and linear), as well as tree manipulation. He went on to higher-level functions, demonstrating maplist and lambda. By the end of the hour he had put together a powerful little toolkit of functions which he used in his finale: symbolic differentiation of univariate expressions.
There it was — functional programming ex nihilo. McCarthy acknowledged IPL V and recursive function theory, but the elegant and practical face he put upon these antecedents was a work of genius. Nobody would ever again wonder what good it was to allow functions to call themselves. And it was all so clear one could go home and build it oneself without any instruction book.
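As a small illustration of the kind of symbolic differentiation McCarthy demonstrated, here is a minimal sketch in Common Lisp. The function name, the operator set, and the rules shown are my own choices for illustration, not McCarthy's original code:

```lisp
;; A minimal symbolic differentiator for univariate expressions.
;; Expressions are s-expressions like (+ x (* 2 x)).
(defun deriv (expr var)
  (cond ((numberp expr) 0)                       ; d/dx c = 0
        ((eq expr var) 1)                        ; d/dx x = 1
        ((symbolp expr) 0)                       ; other symbols are constants
        ((eq (first expr) '+)                    ; sum rule
         (list '+ (deriv (second expr) var)
                  (deriv (third expr) var)))
        ((eq (first expr) '*)                    ; product rule: (uv)' = u'v + uv'
         (list '+
               (list '* (deriv (second expr) var) (third expr))
               (list '* (second expr) (deriv (third expr) var))))
        (t (error "Unknown operator: ~S" (first expr)))))

;; (deriv '(* x x) 'x)  =>  (+ (* 1 X) (* X 1))
```

Because both programs and expressions are lists, the differentiator is just a dozen lines of list manipulation, which is what made the demonstration so striking.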
Today, Lisp is about much more than list processing, and so people call it “Lisp”, deemphasizing the original meaning of list processing.
How do you view yourself: Are you a Lisp guy? A programming language guy who leans towards Lisp? An agnostic?
I think Lisp is the archetype, the crystallized form of all dynamically-typed scripting languages, and for that reason I like it. I also like C and Haskell, because they’re similarly crystallized languages in their respective areas. I also find Java not too shabby, although larger Java apps should probably embed a scripting language interpreter, and you really want to steer clear of “best practices”. ;)
So far I’ve seen roughly three ways to get pulled into Lisp: (1) you’re an older, bearded guy who was there in the ’60s when Lisp came to be and never looked back; (2) you studied computer science in school, learned to program in Scheme, and never looked back; or (3) you’re a follower of the Graham (Paul, that is) who converted you, and you never looked back. You’re too young to be in category 1, and you didn’t study computer science, so: was it Paul Graham?
Yes, Paul Graham. I think Paul really kicked off the Lisp renaissance with his well-written and insightful articles, and I’m very thankful to him.
You work and worked on various Lisp implementations. What is the core feature that attracts you, what does it mean to be a Lisp?
I think the core idea of Lisp is doing the “Right Thing”. This has to be qualified a bit. Lisp’s dynamically-typed, object-oriented, reflective programming style is certainly not for everyone. Haskellers kind of abhor it. But once you embrace that paradigm — and for some applications it’s certainly the best — I think you’ll find that Lisp gets far more things right than other languages with a similar style.
For example, once Lispers found that object-orientation was a nice addition to Lisp, they ran with it, and built the Common Lisp object system, the “biggest and baddest” object system in existence. It features multiple inheritance, multiple dispatch (methods dispatch on all arguments, not only the first argument), methods defined outside classes, nice integration with functional programming, and is fully programmable via a metaobject protocol. This is clearly an instance of Lispers doing the right thing, no matter the costs.
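To make the multiple-dispatch point concrete, here is a hedged sketch in Common Lisp; the class and generic-function names are invented for illustration:

```lisp
;; CLOS methods live outside classes and dispatch on ALL arguments.
(defclass asteroid () ())
(defclass spaceship () ())

(defgeneric collide (a b)
  (:documentation "What happens when A runs into B."))

;; The applicable method is chosen by the classes of BOTH arguments:
(defmethod collide ((a asteroid) (b asteroid))
  "two asteroids shatter")

(defmethod collide ((a spaceship) (b asteroid))
  "ship takes damage")

(defmethod collide ((a asteroid) (b spaceship))
  (collide b a))  ; the symmetric case just delegates

;; (collide (make-instance 'spaceship) (make-instance 'asteroid))
;; => "ship takes damage"
```

In a single-dispatch language the second argument's class would have to be tested by hand inside the method; here the dispatch mechanism does it.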
The flip side of doing the right thing is that you also have to live with its consequences. For example, for Lispers “language-oriented programming”, i.e. the definition of custom languages, is second nature and very common. And to do that right, you have to use a very simple syntax, i.e. Lisp’s s-expressions, which is somewhat lacking in the aesthetics department. But Lispers accept that, because it’s clearly the right thing — you couldn’t do such powerful language-oriented programming in a language with a more elaborate syntax.
You say Lisp is not for everyone, who is it for, you think? And who should stay away from it?
I think Lisp is very widely applicable, so it’s hard to say who it is for exactly. In the words of Kent Pitman:
Two areas in which Lisp has been used, but in which I personally would choose something else, are extremely low-level stuff, like operating systems, and extremely high-level stuff, like verified compilers. You can do these in Lisp, but I think you’ll be fighting a bit of an uphill battle in these areas. For low-level stuff, I’d just use C, and for high-level, verified stuff, I’d look into Haskell or even higher-level languages like Coq or Isabelle.
If Lisp is about doing “the right thing”, who decides? What’s the process there? Some languages have a benevolent dictator, this doesn’t seem to be the case for Lisp.
To stay with the dictator metaphor, Lisps seem to follow a “benevolent junta” model. There was a Common Lisp junta, and there’s something of a Scheme standardization junta. But I don’t know much about these processes.
One thing that’s different from most other languages is that every Lisp user is also a language designer. This means that evolution can happen quickly and in a community-driven way. Language design is too damn hard for a single human; even Lisp needed about 30 years until it got roughly its current shape with Scheme and Common Lisp.
There seems to be disagreement within the Lisp community as well — there’s not just one Lisp, there’s many: Common Lisp, Scheme, Clojure, Arc. While they all share the parentheses and macro systems, they don’t appear to be the same language at all. So, what is it that makes these languages Lisps?
Dave Moon, another central Lisp figure, is developing a new language called Programming Language for Old Timers (PLOT), and he says:
I say it is a dialect of Lisp because it uses a fully dynamic memory model, fully dynamic typing, a resident program semantics (although separate compilation is possible), fully powerful macros (but hygienic!), and because (almost) everything about the language is defined in the language itself. It has the same extreme flexibility and extensibility, and the same minimum of nonsense that gets in your way, that have always been hallmarks of Lisp.
Kent Pitman’s [Lambda, the Ultimate Political Party](http://www.nhplace.com/kent/PS/Lambda.html) looks at this very issue and comes to the conclusion:
About language-oriented programming, could you explain what you mean by that?
Language-oriented programming means that you not only write programs in the language, but also program the language itself, using macros. A premier example of this is Racket, as Sam Tobin-Hochstadt has described in a reddit comment.
Although it’s cool to be able to do that, a commonly heard criticism of Lisp is that it is too dynamic. You can change anything you want; in Common Lisp you can, I think, even adapt the parser at runtime. The question is: is this a good thing? Can a programmer who is not familiar with your code still understand it if it consists of little DSLs for everything? Recently I read a post by Rudolf Winestock on this matter: The Lisp Curse. What are your thoughts?
First, it’s simply a matter of good engineering. If you use these powers in a lousy way, yes, your code will suffer. But as [Luke Gorrie observed](http://lambda-the-ultimate.org/node/1544#comment-18176) a while ago:
Powerful tools can be powerfully abused, but that doesn’t mean I don’t want them!
Second, regarding DSLs, I like to think of language statements like the WHILE loop as a “macro” that expands to more primitive statements, like IF and GOTO. Now, it’s self-evident that code using WHILE is clearer than code using IF and GOTO (and not because GOTO is harmful. GOTO rocks! :)).
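In Common Lisp this picture is literal: a WHILE can be written as a macro that expands into the primitive TAGBODY/GO, i.e. labels and gotos. MY-WHILE is a name I'm inventing here (the standard LOOP and DO already cover this), so this is a sketch, not a missing library feature:

```lisp
;; MY-WHILE expands into TAGBODY/GO: a test, a jump, a back-edge.
(defmacro my-while (test &body body)
  (let ((start (gensym "START"))
        (end   (gensym "END")))
    `(tagbody
        ,start
        (unless ,test (go ,end))   ; exit when the test fails
        ,@body
        (go ,start)                ; loop back
        ,end)))

;; Usage: prints 3, 2, 1.
;; (let ((n 3))
;;   (my-while (> n 0)
;;     (print n)
;;     (decf n)))
```

The GENSYM calls keep the generated labels from colliding with any labels in the user's body, which is the hygiene problem macro writers have to care about.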
I wouldn’t want to miss the power to extend my language’s syntax.
Where do you stand in the whole static vs. dynamically typed languages debate?
My experience is that when I use a statically typed language, especially one where types are easy to use, such as O’Caml or Haskell, I start to obsess about type safety and want to express as many things as possible using types. I really get into a “typeful” mode of programming, and it seems wonderful. And then, when I use a dynamically typed, scripting-style language like Python or Lisp, the whole thing seems much less important, and I seem to get by just as fine.
That said, I’ve studied type systems a bit, and I must say that they really force you to think deeply about programming in general, and are very insightful. To paraphrase an old saying: even if you’re never going to use an advanced type system, you’ll be a much better programmer for having learned about them.
And there are some recent advances in type systems that are plain awesome. For example, the language ATS can give you compile-time errors for very important stuff, like forgetting to close a file handle, or allocating an array of the wrong size. See [this blog post by Chris Double](http://www.bluishcoder.co.nz/2011/02/26/reading-data-from-a-file-in-ats.html). A related language is Tim Sheard’s Omega, about which I wrote [a post on Lambda the Ultimate](http://lambda-the-ultimate.org/node/4088).
Does a typed language help the programmer or hold them back?
One area where current type systems seem lacking is interactivity. In Common Lisp you can redefine everything at runtime. For example, you can add new superclasses to a class at runtime. I don’t know of any current static type systems that can handle that, so for an interactively extensible app like Emacs, they seem unsuited. But I’d like to be proven wrong, and I don’t think there’s any reason why future type systems should be unable to do this.
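The class-redefinition point is easy to show in Common Lisp: re-evaluating a DEFCLASS with a new superclass list takes effect at runtime, and existing instances are updated to the new layout. The class names here are invented for illustration:

```lisp
;; Initially, APP has no superclasses.
(defclass logger () ())
(defclass app () ())

(defvar *app* (make-instance 'app))

;; Later, at runtime, redefine APP to inherit from LOGGER.
;; CLOS updates *APP* (lazily) to the redefined class, so
;; (typep *app* 'logger) is now true.
(defclass app (logger) ())
```

It is exactly this kind of change, legal at any point in a running image, that a static type checker would have to account for.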
What is EdgeLisp and why are you developing it?
EdgeLisp is a not-uncommon Lisp, with some of the same features as Common Lisp: classes, multiple dispatch, standard Lisp control flow (nonlocal exits, UNWIND-PROTECT), a numerical tower.
One thing I’m proud of is how EdgeLisp implements a hygienic DEFMACRO: by putting support for hygiene deep into the compiler, hygiene support is only a couple of dozen lines of code. I want to write a paper about that one day.
The Eich tarpit isn’t much different from the Gosling tarpit. You gnash your teeth a lot, but if you stay within the confines of what’s offered by JS, it’s an acceptable target language for a compiler. And being able to run your language everywhere JS runs is wonderful.
* [The Axis of Eval](http://axisofeval.blogspot.com/)
* [Manuel on twitter](http://twitter.com/msimoni)