I think I finally understand what Alan Kay is saying about Lisp
Joe Marshall recently wrote a series of very entertaining blog posts telling the story of his first experiences with Lisp.
I had a similar experience with Lisp - I didn't go to MIT or take 6.001, but reading the first couple of chapters of SICP changed my entire outlook on computer programming. By the time that happened (the summer of 2002, right before I started university), I was already somewhat proficient with C, although I wasn't particularly interested in computer programming as an activity in and of itself.
The funny thing is that I was exposed to Scheme in my first year of high school through Mike Zamansky's MCS1 at Stuy. That experience wasn't particularly memorable; even less so was one of the other programming languages used in the course (Python, I think?). On the other hand, the course also taught a whole bunch of really neat stuff in StarLogo, which, as they say in Canada, was the cat's ass.
An even more amusing fact is that after going to see the professor before the start of the first term and taking the challenge exam that would give me credit for the first computer science course, I decided that since the university computer science courses didn't look anything like SICP, I would probably be better off investing my time in a math degree. The algebra and calculus examples in SICP were my first exposure to elegant computational mathematics, and a key reason for choosing math and not something else.
What does this have to do with Alan Kay?
Yesterday before falling asleep I was plagued by the question: why don't I enjoy using Haskell as much as Lisp? Haskell is purely functional and features a really neat and unobtrusive type system, which, taken together with its great pattern-directed invocation programming style, provides a better alternative to object-oriented programming. It's a really great programming language.
Could the reason be the ugly syntax? No, that's easy to fix. The type system doesn't bother me. There's a template system for both Haskell and Liskell syntax that provides preprocessor-style macros, but monads actually provide a much more powerful way of building control abstractions than macros do.
The only thing I could think of that Lisp has that Haskell can't provide is a dynamic runtime system. It's as if Lisp comes with a really neat invisible virtual machine.
Among the many nice things Alan Kay says about Lisp, the term late binding comes up often. Practitioners of programming languages that Alan Kay did not have in mind when he invented the term "object-oriented" have taken it upon themselves to equate this to the concept they term dynamic binding, which they define to be type-directed dispatch done at runtime.
This is a textbook example of cargo cult programming. Dynamic types and runtime dispatch are not axioms of Lisp and Smalltalk; they are emergent properties of self-describing computational systems. The key is that the definition of Lisp is written in Lisp. The "invisible virtual machine" in Lisp is Lisp. That is what Alan Kay means by late binding.
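To make that concrete, here is a minimal sketch of the eval/apply core, written in Scheme. It is my own toy illustration, not the Lisp 1.5 or SICP evaluator: it handles only numbers, variables, quote, if, lambda, and application, and it borrows the host's primitives.

    ;; A toy eval/apply pair: the interpreter for the (tiny) language is
    ;; itself an ordinary Lisp program.  Environments are association lists.
    (define (my-eval expr env)
      (cond ((number? expr) expr)
            ((symbol? expr) (cdr (assq expr env)))
            ((eq? (car expr) 'quote) (cadr expr))
            ((eq? (car expr) 'if)
             (if (my-eval (cadr expr) env)
                 (my-eval (caddr expr) env)
                 (my-eval (cadddr expr) env)))
            ((eq? (car expr) 'lambda)            ; build a closure
             (list 'closure (cadr expr) (caddr expr) env))
            (else                                ; application
             (my-apply (my-eval (car expr) env)
                       (map (lambda (e) (my-eval e env)) (cdr expr))))))

    (define (my-apply proc args)
      (if (procedure? proc)                      ; host primitive, e.g. +
          (apply proc args)
          (let ((params (cadr proc))             ; (closure params body env)
                (body   (caddr proc))
                (env    (cadddr proc)))
            (my-eval body (append (map cons params args) env)))))

    ;; ((lambda (x) (+ x 1)) 41)  =>  42
    (my-eval '((lambda (x) (+ x 1)) 41)
             (list (cons '+ +)))

The point is that nothing here is hidden machinery: the evaluator is just another Lisp program, which is why the language can describe, and therefore rebind, its own semantics.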
Well, this insight did take almost seven years, but at least now I have a snappy label for the dynamic languages du jour whose existence helps put a lot of computer book publishers' editors' kids through college and kills hundreds of thousands of trees every year: cargo cult programming languages.
The question that remains unanswered is: why isn't Haskell self-describing? The reason is that there are actually two computational systems behind Haskell: one for the language itself, and one underlying the language's type system. If a type system could be expressed in a metacircular way by the very system it is trying to type, it wouldn't be able to express anything about the types (restrictions on computational power) of that system.
You can read more about the metacircular definition of Lisp in several books. The first of these is the original Lisp 1.5 Programmer's Manual; Alan Kay has called the metacircular definition of Lisp given there the "Maxwell's Equations of Software." SICP includes a section on constructing a metacircular evaluator, and a whole chapter built around extending that evaluator with non-determinism and logic programming. The most thorough examination of the metacircular formulation of Lisp can be found in John Allen's highly recommended 1978 book, Anatomy of Lisp.
4 comments:
Would you elaborate a bit on the distinction between dynamic binding and late binding that you talk about?
That view does not seem to be widespread. :-)
I am interested...
According to Wikipedia:
"In object oriented programming, dynamic binding means determining the exact implementation of a request based on both the request (operation) name and the receiving object at the run-time."
This is just standard OO dispatch. What Alan Kay refers to as late-binding relates to self-describing systems in that the dispatch mechanism itself is expressed in terms of metaobjects. That's exactly the same as the eval-apply metacircular evaluator in Lisp. This is what Alan Kay means when he says that Smalltalk and (the original) Lisp are the only two languages he knows of that are late-bound.
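For what it's worth, here is that idea in miniature, using made-up names rather than anything from an actual Smalltalk: when dispatch itself is just a procedure in the language, the program can inspect or replace its own method-lookup machinery.

    ;; Objects as closures; `send` is the dispatcher, and it is ordinary
    ;; user-level code rather than fixed machinery in the runtime.
    (define (make-counter)
      (let ((n 0))
        (lambda (msg . args)
          (cond ((eq? msg 'increment!) (set! n (+ n 1)) n)
                ((eq? msg 'value)      n)
                (else (error "unknown message" msg))))))

    (define (send obj msg . args)
      (apply obj msg args))

    (define c (make-counter))
    (send c 'increment!)   ; => 1
    (send c 'value)        ; => 1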
The beauty of Lisp is, as many people have said, similar to the beauty of set theory: it is very "powerful", or "expressive", but it is also very small and simple. So small and simple that people can remember it - and start thinking backward - i.e. not only "what can be done with Lisp", but "how to make its core even simpler." Simplicity is the reason why set theory is typeless - all entities are of the same kind: sets.
It is interesting (and relevant for this post) that Alan Kay also thought about that.
You said that the "invisible virtual machine" is what Alan Kay means by late binding. But many scripting languages (such as Perl and Python) also have an interpreter, which is a virtual machine. Does that mean these languages all have late binding?