February 2, 2009

Parenscript tricks and parallelism

The new version of Parenscript features a user-definable obfuscation facility. Among other amusing things, it can be used (due to JavaScript's under-utilized support for Unicode identifiers) to make your code Asian:


(ps:obfuscate-package "LAMBDACHART"
  (let ((code-pt-counter #x8CF0)
        (symbol-map (make-hash-table)))
    (lambda (symbol)
      (or (gethash symbol symbol-map)
          (setf (gethash symbol symbol-map)
                (make-symbol (string (code-char (incf code-pt-counter)))))))))

LAMBDACHART> (ps (defun foo (bar baz) (+ bar baz)))
"function 賱(賲, 賳) {
    賲 + 賳;
};"


Unrelated, I recently found Marijn Haverbeke's PCall library for parallelism in Common Lisp. The library provides futures (called 'tasks') as its parallelizing mechanism, along with thread-pool management facilities (the library is built on bordeaux-threads) for tweaking how the futures are actually executed.
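To make the idiom concrete, here is a minimal sketch of the task/future pattern, assuming the pexec/join interface described in PCall's documentation (parallel-sum-of-squares is my own illustrative function, not part of the library):

```lisp
;; PEXEC wraps its body in a task that a pool thread may pick up;
;; JOIN blocks until the task's value is available.
(defun parallel-sum-of-squares (xs ys)
  (let ((task (pcall:pexec (reduce #'+ xs :key (lambda (x) (* x x))))))
    ;; Do the other half of the work on this thread while the
    ;; task runs in the pool...
    (let ((other (reduce #'+ ys :key (lambda (y) (* y y)))))
      ;; ...then force the future and combine the results.
      (+ (pcall:join task) other))))
```

Note how the control flow reads exactly like the sequential version; the only additions are the pexec wrapper and the join.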

Unlike MultiLisp, which implemented the same futures-based parallel model, PCall provides no macro for evaluating a function's arguments in parallel before applying the function to them. That seemed to be a popular facility in the parallel research Lisp systems of the 80s, probably because it is a no-brainer once you consider the Church-Rosser theorem; upon some reflection and a little coding, however, that construct proves to be not very convenient.
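For the curious, such a construct is only a few lines of macrology on top of PCall's primitives. This is a hypothetical sketch (apply-parallel is my own name, not a MultiLisp or PCall export), again assuming pexec and join:

```lisp
;; Spawn one task per argument form, then join them all before
;; applying the function to the results.
(defmacro apply-parallel (fn &rest arg-forms)
  (let ((tasks (loop repeat (length arg-forms) collect (gensym "TASK"))))
    `(let ,(mapcar (lambda (task form) `(,task (pcall:pexec ,form)))
                   tasks arg-forms)
       (funcall ,fn ,@(mapcar (lambda (task) `(pcall:join ,task)) tasks)))))

;; (apply-parallel #'+ (expensive-1) (expensive-2)) evaluates both
;; argument forms in parallel before summing the results.
```

The inconvenience shows up quickly: argument forms rarely do equal amounts of work, so you end up hand-tuning which forms deserve a task anyway.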

I think the futures approach to parallelism is the most widely useful model available today. It shares all of the conceptual benefits of its cousin delayed/lazy evaluation: futures are declared and used explicitly in the code, without forcing (pun fully intended) any contortions in the control flow of the code using those futures. If you can write a function, then you can define a task that can be executed in parallel.

The model doesn't handle concurrency control beyond the synchronization provided by joining/forcing the future. If your tasks share state, you'll need to do the synchronization yourself (this is where you take advantage of the locks provided by bordeaux-threads), although ideally you should write your code so that synchronization happens in the code making and consuming the tasks, and the tasks themselves don't share state.
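When tasks really must share state, the guard looks like any other bordeaux-threads locking code. A sketch, assuming PCall's pexec/join alongside the standard bt:make-lock and bt:with-lock-held (the special variables are my own illustrative names):

```lisp
(defvar *results* '())
(defvar *results-lock* (bt:make-lock "results"))

(defun record-result (x)
  ;; Serialize mutation of the shared list across pool threads.
  (bt:with-lock-held (*results-lock*)
    (push x *results*)))

;; Each task may run on a different pool thread, but the pushes
;; are serialized by the lock.
(let ((tasks (loop for i below 10
                   collect (let ((i i))
                             (pcall:pexec (record-result (* i i)))))))
  (mapc #'pcall:join tasks))
```

The inner (let ((i i)) ...) is there so each task closes over its own copy of the loop variable.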

One interesting thing about the library is Haverbeke's extreme pessimism about native thread overhead (the default thread pool size is 3). On many systems that is certainly justified, but apparently some half-decent OS implementations exist. I'm interested in doing some benchmarks with SBCL using NPTL threads on an AMD64 box to see what kinds of numbers are reasonable.
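If PCall's documentation is to be believed, the pool size is exposed as a setf-able place, so that kind of benchmarking should only require a one-liner per configuration:

```lisp
;; Grow the pool before queuing tasks, e.g. to one thread per core.
(setf (pcall:thread-pool-size) 8)
```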

1 comment:

marijn said...

Actually, on SBCL on a 64-bit Linux system, spawning a new thread for every process is only about three times slower than creating a PCall task (though it also makes the joining a bit more annoying). I'm not particularly pessimistic about thread overhead, I just didn't want to waste any memory. People will have to think about specifying their own thread-pool size anyway. (Depending on the hardware, whether tasks do blocking I/O, etcetera.)