February 26, 2009

CL-USERs map

If you have a Google account and use Common Lisp, there is a CL-USER Google map you can annotate with your presence. This can be a useful tool in organizing local user groups.

February 23, 2009

ILC 2009

As a public service announcement, I'd like to remind readers that the early registration period for the 2009 International Lisp Conference, to be held at MIT, ends in a week. Register today! The conference is only a month away, and I am quite excited about it.

GWT and deferred binding

I was going through last year's Google I/O conference videos and noticed a few GWT-related ones. One in particular seemed interesting: Faster-than-Possible Code: Deferred Binding with GWT by Bruce Johnson.

The technique that GWT dubs "deferred binding" consists of generating feature-specific output code (JavaScript) from a common codebase (Java for GWT, Lisp for Parenscript) and serving it up under different URLs so the resources are cached properly.

I first encountered the idea in the spring of 2007 when Daniel Gackle proposed it as a way of efficiently handling the generation of feature-specific JavaScript code from Parenscript, and we implemented it in a few days.

The system would generate several pages from a common definition written in CL-WHO/Parenscript, and serve them up under different URLs that also incorporated a version number (via a mechanism that linked source files to generated resources and tracked the file modification date). We used it to generate browser-specific code and profiler-instrumented code, so in all we'd get a cartesian product of (IE, FF) x (regular, profiled) as output. This approach generated perfectly cacheable HTTP resources and ensured trouble-free application upgrades.
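A minimal sketch of that cartesian-product build might look like the following (the names BUILD-VARIANTS and COMPILE-SCRIPT are hypothetical; the real system's details differed):

```lisp
;; Sketch only: for each (browser x profiling) combination, compile the
;; shared source and publish it under a version-stamped URL.
;; COMPILE-SCRIPT is a hypothetical function standing in for the real
;; Parenscript compilation step.
(defun build-variants (source-file)
  (loop for browser in '(:ie :ff)
        append (loop for profiledp in '(nil t)
                     collect (list (format nil "/js/~(~a~)~:[~;-profiled~].~a.js"
                                           browser profiledp
                                           (file-write-date source-file))
                                   (compile-script source-file browser profiledp)))))
```

Because the URL changes whenever the source file does, each generated resource can be served with far-future cache headers, and an application upgrade simply points pages at the new URLs.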

Something interesting that I noticed when developing the feature-specific code generation mechanism was its resemblance to context-oriented programming: each feature acts as a layer which affects the way that the compiler produces code.

Besides browser-specific code generation, in GWT the technique is used for generating locale-specific resources, and by virtue of its implementation, to provide an extremely obtuse way to emulate Java's broken metaprogramming facilities (see http://www.zenika.com/blog/wp-content/uploads/2007/08/tutorial-binding-en.pdf, for example).

In the intervening time I have thankfully learned a little more about programming for the web, and have come to the inevitable conclusion that using this approach to generate browser-specific code is fundamentally flawed: it will almost certainly ensure that your code breaks on browsers you have not developed for. (If I had started reading comp.lang.javascript sooner, I could have avoided making this mistake - if you're not reading that group yet, start now.) Working to add support for new browsers under this scheme will not only waste an incredible amount of time that you would not have had to spend at all otherwise, but will actually make your code more brittle and harder to maintain.

The only viable approach to dealing with differences in browser capabilities is feature detection. I can only say that I am glad that Parenscript is not pushing a "framework" on anybody, so my ignorance only impacted one project. Generating feature-specific code is not in itself a bad idea - GWT uses it to also generate locale-specific resources, with the corresponding bandwidth savings and cacheability advantages. I can only hope for the sake of their users that the GWT developers repent and change their stance on generating browser-specific code.
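For readers unfamiliar with the term, feature detection means testing for the capability itself rather than sniffing the browser's identity. A generic sketch (not Parenscript-specific; the names are illustrative):

```javascript
// Feature detection: probe for the capability itself rather than
// guessing it from the browser's name or user-agent string.
function hasMethod(obj, name) {
  return !!obj && typeof obj[name] === "function";
}

// Pick an implementation once, based on what this environment supports;
// unknown future browsers that support forEach get the fast path for free.
var forEach = hasMethod(Array.prototype, "forEach")
  ? function (arr, f) { arr.forEach(f); }
  : function (arr, f) { for (var i = 0; i < arr.length; i++) f(arr[i]); };
```

The point is that code written this way degrades gracefully on browsers you never tested, which is exactly what per-browser code generation cannot do.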

February 16, 2009

Presidents' Day Parenscript release, and a little on how I deploy web apps

It is Presidents' Day in the US of A, and that means a new release of Parenscript. Be aware that this one may break your code.

In other news, a couple of weeks ago I decided to sign up at stackoverflow, a community Q&A site for programmers. It has member-driven moderation and good search and tagging facilities, but a much wider scope and a lack of (for lack of a better term) "narrative" compared to Usenet, mailing lists, or message boards. The moderation means that spam, trolling, and off-topic messages are kept under control; the lack of narrative means you can't participate in the site the way you do on Usenet, but that is hopefully offset by the search facilities, which let you find answers more effectively than searching Usenet archives and leave your contribution answering questions you have some knowledge of.

So far I've answered one question about deploying Lisp web apps. I'm thinking of expanding it in more depth, adding examples, and turning it into a blog post ("article" in the old media parlance). Which naturally leads me to remark on my own online community participation: I stopped reading Usenet and participating in community sites a few years ago, and now follow blogs exclusively, for their narrative, personalization, and Internet etiquette.

February 9, 2009

Web browser fun

The diagram below summarizes my recent cursory survey of web browsers. I think the one important conclusion that can be drawn is this: Webkit will be critical in the future. I know at least one person who doesn't have a computer, but uses Facebook from her mobile phone. There will be more and more people like that. Another important factoid: IE6 isn't dead. IE Mobile 6 will use the JavaScript implementation of IE8, but its layout engine is based on IE6's. That will probably induce nausea in most web developers, but it makes me glad I still take care to develop my web apps to run in IE6.



Other fun things:

- The repository version of Parenscript will probably break your code, because your code probably deserves to get broken.

- I've released a new version of uri-template, which fixes a bug in how URI-encoding was performed.

February 2, 2009

Learn programming through JavaScript

I found a link to Eloquent JavaScript on some website shortly before posting the last blog entry about PCall, and just realized that it's written by the same Marijn Haverbeke. Interesting coincidence.

From a quick skimming, the book itself appears aimed at those new to programming. I like the style and the exercises used, and the coverage of JavaScript appears good without reading like a spec. The book also covers everything you need to know to get started building web applications. If someone wanted to learn programming by jumping straight into the web (a somewhat prudent thing to do in this day and age), I would recommend this book alongside SICP; it's certainly a lot better than most of the other JavaScript learning resources I've encountered.

In other Marijn Haverbeke-related JavaScript news, he also has Lisp implementations of a JavaScript parser and a JSON library. The former provides another way besides jwacs to get JavaScript code into Lisp, while the latter provides an alternative to CL-JSON.

Parenscript tricks and parallelism

The new version of Parenscript features a user-definable obfuscation facility. Among other amusing things, it can be used (due to JavaScript's under-utilized support for Unicode identifiers) to make your code Asian:


(ps:obfuscate-package "LAMBDACHART"
  (let ((code-pt-counter #x8CF0)
        (symbol-map (make-hash-table)))
    (lambda (symbol)
      (or (gethash symbol symbol-map)
          (setf (gethash symbol symbol-map)
                (make-symbol (string (code-char (incf code-pt-counter)))))))))

LAMBDACHART> (ps (defun foo (bar baz) (+ bar baz)))
"function 賱(賲, 賳) {
賲 + 賳;
};"


Unrelated: I recently found Marijn Haverbeke's PCall library for parallelism in Common Lisp. The library provides futures (called 'tasks') as a parallelization mechanism, along with thread-pool management facilities (the library is built on bordeaux-threads) to tweak how the futures are actually executed.

Unlike MultiLisp, which implemented the same futures-based parallel model, there is no macro provided to evaluate a function's arguments in parallel before applying the function to them. That seemed to be a popular facility in the parallel research Lisp systems of the 80s, probably because it is a no-brainer once you consider the Church-Rosser theorem; however, upon some reflection and a little coding, that construct proves to be not very convenient.

I think the futures approach to parallelism is the most widely useful model available today. It shares all of the conceptual benefits of its cousin delayed/lazy evaluation: futures are declared and used explicitly in the code, without forcing (pun fully intended) any contortions in the control flow of the code using those futures. If you can write a function, then you can define a task that can be executed in parallel.
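To make that concrete, here is a minimal sketch using PCall's exported PEXEC and JOIN operators (the function and variable names are mine, not the library's):

```lisp
;; Minimal futures sketch with PCall: PEXEC spawns a task on the
;; thread pool, JOIN blocks until the task finishes and returns its value.
(defun parallel-sums (xs ys)
  (let ((a (pcall:pexec (reduce #'+ xs)))   ; runs in a pooled thread
        (b (pcall:pexec (reduce #'+ ys))))  ; runs concurrently with A
    (+ (pcall:join a) (pcall:join b))))
```

The body of PARALLEL-SUMS reads exactly like the sequential version would; the only additions are the PEXEC wrappers and the JOINs at the points where the values are actually needed.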

The model doesn't handle concurrency control beyond the synchronization provided by joining/forcing the future. Ideally you should write your code so that synchronization happens only in the code making and consuming the tasks, so that the tasks themselves don't share state; if your tasks do share state, you'll need to do the synchronization yourself (this is where you take advantage of the locks provided by bordeaux-threads).
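As an illustration of that last point (the variable names here are hypothetical; the bt: operators are bordeaux-threads'), guarding shared state looks like this:

```lisp
;; Hypothetical shared-state example; BT:MAKE-LOCK and BT:WITH-LOCK-HELD
;; come from bordeaux-threads.
(defvar *results* '())
(defvar *results-lock* (bt:make-lock "results"))

(defun record-result (x)
  ;; Serialize access so concurrent tasks can't corrupt the list.
  (bt:with-lock-held (*results-lock*)
    (push x *results*)))
```

But again, the cleaner design is to have each task compute a value independently and let the consumer combine results after joining, so no lock is needed at all.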

One interesting thing about the library is Haverbeke's extreme pessimism about native thread overhead (the default thread pool size is 3). On many systems that is certainly justified, but apparently some half-decent OS implementations exist. I'm interested in doing some benchmarks with SBCL using NPTL threads on an AMD64 box to see what kinds of numbers are reasonable.