December 16, 2010

Lisp as glue

zahardzhan (Roman Zakharov), a defector from Common Lisp to Clojure, has published a report about his first commercial program in Clojure. His thesis: "real programs are balls of mud: sticky goo that glues together any number of technologies, semantics, and languages."

Понятие "glue language" - "язык-клей" началось с Tcl и Perl тк они обычно использовались для обработки и послания данных между разными, уже существующими системами ("cклеивая" их вместе). В принципе Лисп полностью годен как клей, но мешало плохая интеграция с операционной системой, неполноценные FFI, и отсутствие библиотек для парсинга разных форматов и интерфейсинга с базами данных и сетевых сервисов. Сейчас первые две проблемы во многих своих образах устранены, и количество библиотек для интерфейсинга все растет. Главные задачи что бы сделать Common Lisp годным как клей, это разработка новых библиотек для интерфэйсинга, и обновление и пиар существующих.

December 15, 2010

Understanding the Web in the context of the medium/message dichotomy

The Web is not a medium. The Web is a space of possible combinations of medium, message, and method. The medium and the message are familiar, but what exactly is a method? Methods are algorithms and effects. Previous media had no algorithmic component, no way to conditionalize or effect outcomes (choose-your-own-adventure books came closest).

The Web IS NOT interactive. It has a two-way audience: every viewer is a publisher. But the websites themselves aren't publishers; they're the medium. Every website is a new medium.

December 11, 2010

Programming language evolution

Programming language evolution is sometimes claimed to be a search for greater abstraction.

This is wrong, and mirrors the misunderstanding of biological evolution. There is no end-goal, no such thing as progress or improvement. It's about adaptation.

There are different kinds of models and different kinds of environmental factors, such as machine constraints, and how ignorant of mathematics and past programming languages the designers of a programming language are.

A good example of this is MapReduce. The programming paradigm goes back to APL. Limited support for parallelism was added in the form of vector co-processors on IBM mainframes. The first "massively" parallel implementation of the idea was StarLisp.

Another example is Objective C. Brad Cox claims that the only reason he developed Objective C was because at the time Xerox wouldn't sell Smalltalk (Masterminds of Programming, p. 258).

In terms of parameters like dynamism and introspection, Lisp already had all the necessary features in 1958. From there the same basic idea of lambda abstraction has been applied to Actors and prototype-based languages like Self and JavaScript, but in terms of other "dynamic" programming languages, almost all of them are a step backward from 1958 Lisp.

The same thing can be said about C and Algol 68.

December 1, 2010

Gaming academia, or why style is more important than substance

I started with failure. I submitted a couple of papers to conferences and then waited on pins and needles for months, only to be rejected. It was disappointing and motivating at the same time. I realized there was something fundamentally wrong in my writing strategy. At the same time, some of my counterparts at Microsoft were publishing prolifically, and I asked myself, "What is it about their papers that gets them accepted so regularly?" I printed several of their papers looking for a pattern, and I found it. Every one of their papers used the same template and certain stylistic elements. I started using a similar formula and found my papers getting accepted almost immediately. My team submitted papers to some important venues - SIGMOD, VLDB, ICDE, and others - and had 90% of everything we submitted accepted, even in venues in which the acceptance rates were 1 in 6 or 1 in 9. Between 2001 and 2007, my team published more than 35 papers. We became incredibly efficient at it, authoring professional papers in just a few days, almost always using ideas and experimental results we had on hand from our regular line item development work.

--Sam Lightstone, Making it Big in Software

November 18, 2010

Free Software license onion

There are tons of Free Software licenses out there, and it can be confusing choosing one for your Lisp project. I'm writing this guide to clear up some confusion, but be aware that I'm not a lawyer and you probably should consult one if you are going to take advantage of this advice for something that you hope to commercialize.

A good strategy for choosing a license is to consider how the project would fit in a software stack - is it a general-purpose utility library? A library to interface to specific services or hardware? A web server that can be used to build applications? A content management system that you one day hope to commercialize? These projects all have different potential uses, but it would be nice if their licensing terms reflected those uses and allowed the content management system to use the web server, allowed the web server to use the service interface library, and allowed the service interface library to use utilities from the general-purpose library.

I think general-purpose utility libraries should be public domain (consider using the unlicense, which is a copyright waiver with a warranty disclaimer). One thing people often want to do is copy-paste one or two functions out of them into random projects. Other good things to put into the public domain include benchmark code and test suites (you can release just those parts of your project as public domain).

Next up the hierarchy are permissive licenses. There are dozens of these around, but I think only those that are LLGPLv2.1-compatible should be used for Free Software written in Lisp (Wikipedia has a chart summarizing Free Software license compatibilities).

There are a few of these (see the GNU license list to check whether a particular license is GPL-compatible), but generally most of them are like the BSD license. I recommend using the ISC license, which is really concise and is used by OpenBSD. Good examples of projects that can be licensed under a permissive license are parsers for data formats and interfaces to C libraries or web services. Permissive-licensed software can include public domain code and code from most other permissive-licensed software.

Projects you do not want to see forked into a closed-source product without giving back to community should be licensed under the LLGPLv2.1 (version 2.1 specifically, as LGPLv3 code cannot be used in a GPLv2 project). Why the LLGPL and not the LGPL or the GPL? The GPL makes it impossible to use a Lisp library as part of a closed-source product (even if you only use it as a library and make no modifications), and the wording of the LGPL does likewise because the "linking clause" basically presumes you're using C.

LLGPL software can incorporate other LLGPL code, public domain code, and LLGPL-compatible permissive license code, but for example LLGPL code can't be put into an ISC-licensed project (the whole project would have to be re-licensed under the LLGPL).

I think it's pretty obvious that you shouldn't license your Lisp library under the GPL if you want other people to actually use it, but how to decide between a permissive license and the LLGPL? I think that aside from practical considerations, many times it comes down to a moral choice. LLGPL forces sharing reciprocity on others. I believe copyright should be abolished or severely reformed, and until those changes start taking place, the LLGPL can be used to similar effects through the impact of works on which you own the copyright (the real pirates aren't the people downloading mp3s, they're the people violating the GPL).

I think the most interesting licensing situation is when you want to develop commercial software, but allow proprietary use and extension only by certain parties (hopefully ones which will pay you lots of money). The general strategy for this is to dual-license your code under the GPL and a commercial license. The most prominent example of a project using this strategy is MySQL.

One implication of this is that in order to make it into your official repository to be dual-licensed, any patches must have their copyright assigned to you/your company by their contributors.

Which version of the GPL should you choose for dual-licensing? I think either v2 or v3 is fine (one important thing v3 adds is the anti-tivoization clause, which keeps locked-down hardware platforms from refusing to run modified software).

One thing the GPL doesn't cover is running proprietary modifications of GPLed software as a service without any distribution going on (for example, Google is rumored to have Linux kernel patches running that haven't been released). The AGPL addresses this issue. I don't know of any software dual-licensed under the AGPL, but I think it can be a promising strategy for a variety of projects.

November 16, 2010

Character encoding is about algorithms, not datastructures

One thing you might be aware of is that both SBCL and Clozure represent characters using 4 bytes. There's been significant discussion about this already, but I hope I can offer more insight into how you can apply this to your Lisp applications.

First, the one thing most people seem to agree on is that UTF-16 is evil (it should be noted that both CLISP and CMUCL use UTF-16 as their internal character representation).

The important thing about UTF-32 vs. UTF-8 and UTF-16 is that it is not primarily a question of string size, but of algorithms.

Variable-length encodings work perfectly fine for stream algorithms. But string algorithms are written on the assumption of constant-time random access and being able to set parts of a string to certain values without consing. These assumptions can't be satisfied when variable-length encodings are used, and most string algorithms would not run with any level of acceptable performance.

What about immutable strings? Random access is still not constant-time for variable-length encodings, and all the mutator operations are gone. In effect most immutable string algorithms actually end up being stream algorithms that cons a lot.

Chances are, you're already using stream algorithms on your strings even if you're not aware of it. Any kind of search over a string really treats that string as a stream. If you're doing string concatenation, you're really treating your strings as though they were immutable - consider using with-output-to-string to cut down on consing and to simplify your code.
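The difference is easy to see in a small sketch (Python here purely for illustration, with str standing in for a fixed-width internal string and bytes for a UTF-8 buffer):

```python
import codecs

s = "λx.x"               # "λ" needs two bytes in UTF-8, the rest one each
utf8 = s.encode("utf-8")

# Fixed-width representation: character index equals storage index, so
# random access is constant-time.
assert s[1] == "x"

# Variable-length representation: byte 1 is the second byte of "λ", not
# "x". Finding the n-th character means scanning from the buffer's start.
assert utf8[1:2] != b"x"
assert (len(s), len(utf8)) == (4, 5)

# Stream processing, by contrast, never needs random access: decode
# incrementally and handle one character at a time.
decoder = codecs.getincrementaldecoder("utf-8")()
chars = []
for b in utf8:
    chars.extend(decoder.decode(bytes([b])))
assert "".join(chars) == s
```

The same contrast holds in any language: the fixed-width representation pays four bytes per character to keep aref-style access O(1), while the stream view is indifferent to how wide each character's encoding is.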

One thing that is special about UTF-8 is that it is the de-facto character encoding standard of the web. When you're reading UTF-8 into a Lisp string, what you're really doing is decoding and copying each character.

Most web application patterns are based around searching for and extracting some string from the HTTP request, and either using that string as a retrieval key in a database or splicing it together with some other strings to form the reply. All these actions can be modeled in terms of streams.

One of the things that makes John Fremlin's tpd2 fast is that it dispenses with the step of decoding and copying the incoming UTF-8 data into a Lisp string (this is also what antiweb does). Using some compiler macros and the cl-irregsexp library, all the searching and templating is done on byte arrays that hold the UTF-8 encoded data (Lisp string literals are converted to UTF-8 byte arrays by compiler macros). The result is a UTF-8 byte array that can be sent directly back to the web browser, bypassing another character encoding and copying step.

I read somewhere that the Exokernel operating system permitted extending this concept further down the software stack by allowing applications to send back pre-formatted TCP/IP packets, although I don't know if that's actually possible or how much of a speedup it would give.

In addition to skipping the overhead of copying and re-encoding characters, working on UTF-8 byte sequences directly means you can use algorithms that depend on working with a small alphabet set to achieve greater performance (an example of this is the Boyer–Moore–Horspool string searching algorithm).
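To make the small-alphabet point concrete, here is a minimal Boyer–Moore–Horspool search over raw byte strings (a sketch in Python; the function name and language choice are mine, not tpd2's). The 256-entry bad-character table is exactly what working on bytes, rather than the full character repertoire, makes practical:

```python
def bmh_find(haystack: bytes, needle: bytes) -> int:
    """Boyer-Moore-Horspool search over bytes.
    Returns the index of the first occurrence of needle, or -1."""
    n, m = len(haystack), len(needle)
    if m == 0:
        return 0
    # Bad-character shift table: one entry per possible byte value.
    # A byte absent from the needle lets us skip the full needle length.
    shift = [m] * 256
    for i, b in enumerate(needle[:-1]):
        shift[b] = m - 1 - i
    i = 0
    while i <= n - m:
        if haystack[i:i + m] == needle:
            return i
        i += shift[haystack[i + m - 1]]
    return -1
```

Because UTF-8 never encodes one character's bytes as a suffix of another's, matching encoded needles against encoded haystacks byte-for-byte gives the same answers as searching decoded strings, with no decoding step.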

November 7, 2010

uri-template, Eager Future and Ruby, and a bit more raving about a Lisp operating system

I've put uri-template and Eager Future up on github:

https://github.com/vsedach/uri-template
https://github.com/vsedach/Eager-Future

uri-template now uses named-readtables, which makes it possible to use readtables and reader macros the same way you use packages. I recommend named-readtables to everyone who uses reader macros, and even to those who don't, for the sake of readtable-case :invert (the most promising way to get camelCase-style capitalization in symbols).

The plan is to move uri-template to the LLGPL (it was licensed under BSD), and to rewrite Eager Future as a completely new project with features quite unique among competing libraries (also under the LLGPL, but first I need to figure out how to get around SBCL: there seems to be a bug with weak references there; details).

Justin Grant showed how to implement a Ruby to Lisp compiler. I have finally gone mad and also want to build a compiler targeting Lisp, only from C. Why? To take NetBSD, compile its drivers to Lisp, and end up with a Lisp operating system (I did say I'd gone mad). Details on Hacker News. The project lives here; as a starting point I took Zeta-C, a C compiler for Lisp Machines, but it's already clear that almost everything will have to be rewritten. To begin with, does anyone know whether a working C parser written in Lisp exists (the one in Zeta-C is a nasty thing generated by a Soviet-era yacc and translated into Zetalisp)?

October 26, 2010

Protect the flock (or how to mitigate the effects of HTTP session hijacking without using SSL)

So this thing called Firesheep is making a splash the past couple of days. It's a Firefox plugin that lets you hijack HTTP sessions on open WiFi networks (which presumably are NATed, hence trying to use the client IP address as extra authentication doesn't work).

It's possible to mitigate the scope of this kind of session hijacking significantly without resorting to tricks that break the back button. A note about what "mitigate" means here: we're on an open network, so any traffic from the server to the client can be spied on, and since we don't want to break the back button by using one-time information, any GET request that is made by the client can be spied on and repeated by the hijacker. However, the hijacker won't be able to make GET requests to URLs that she hasn't yet spied on, and will be unable to make POST requests.

So how does this work? I'm assuming the initial login is done over HTTPS, during which time the server sends the client a shared secret that the client then stores locally (window.name, Flash LSO, or localStorage). When a client wants to GET a URL, she rewrites it (this can be done using JavaScript on onload, for example) with a query parameter that is a hash (SHA1 in JavaScript, for example) of the secret concatenated with the URL. The server then verifies the hash before responding to the request. POST requests are handled by performing the same hashing on unique server-generated form identifiers (you're already using these to prevent duplicate form submissions, right?).

That wasn't so hard. Let me know if you see any problems with this scheme, or know of a better way.

October 19, 2010

Various Lisp news

I've put uri-template and Eager Future on github and did some work on both.

uri-template got a new release, and now uses named-readtables to provide a modular way to use reader macros. I think named-readtables is a really big deal; if your library defines reader macros, start using named-readtables today. Once it is widely adopted, named-readtables can be used to facilitate things like a global move to readtable-case :invert.

I'm working on some really interesting features for Eager Future, but I'm getting stumped by how finalizers and thread-interrupt interact in SBCL. Any help appreciated (I've posted the problem description to sbcl-help).

I've mentioned CL-JavaScript before, and it's cool to see similar projects. Justin Grant did a toy Ruby to Common Lisp compiler that (no surprise) is a lot faster than Ruby at calculating factorials. The source code is a good illustration of why Common Lisp is the ultimate language implementation language.

In local news, Montreal Clojure user's group will be hosting their first meeting October 26 (details).

In less Lisp-related news, Foulab is hosting a demo party November 27.

August 15, 2010

Input needed

Recently I've been doing a major cleanup/arson of CLiki. The cleanup effort was inspired by recurring comments on Hacker News of the form: "I didn't find Lisp libraries" or "I couldn't decide which libraries to use."

I would love for everyone to add a description of their Free Software Lisp libraries to CLiki with ASDF-installable tarballs and appropriate topic markers so libraries are easy to find and compare (maybe do it for cl-user.net first; I'm entertaining the idea of writing a scraper that would auto-generate new CLiki pages from cl-user.net entries). Currently this does not seem to be very realistic.

Instead I'm going to ask people who read Planet Lisp (where this blog is syndicated) to contribute to two specific tasks:

  • Go through the list of utilities packages on CLiki (add a CLiki entry with the *(utilities) tag for any utilities packages you know that aren't in the list) and add a description of what they contain to the package CLiki page.
  • Contribute to the Great Macro Debate CLiki page.

Right now there are almost two dozen "utility" CL packages offering everything from my-defun*$%# to map-nthcadadar (that's a joke, but only a slight exaggeration). It's quite hard to decide what to choose, or why. To me cl-utilities seems to be the most sane package, but it hasn't seen development since 2006 (maybe it doesn't need any?). Disclaimer: I use kmrcl and Anaphora in my software.

The Great Macro Debate was a round-table at ILC 2009 that asked the unaskable: are macros evil?

To most post-Y2K Lisp programmers (like me) this seemed ridiculous. We grew up on Paul Graham's kool-aid. Macros are powerful, macros are awesome, they are special, don't use them when you can use a function, everything will be great.

But then all these experienced Lisp programmers came out at ILC and said that macros are bad for software maintenance. How can this be? In the absence of concrete examples, the cognitive dissonance was too great. The only defense mechanism was to tell yourself "bad programmers don't understand macros" and move on.

Two events changed my point of view. The first was encountering defclass* in a commercial project. The second was working with the TPD2 networking code and encountering my-defun (and you thought I was joking?). I came face-to-face with macros that made software maintenance hard.

The point of the Great Macro Debate CLiki page is to collect all relevant information about when and why to write macros. That information will then be distilled into a sort of macro style guide, with examples and recommendations of how to write relevant macros, and what kinds of macros not to write, to make software maintenance easier.

JavaScript and Lisp

A whole series of new projects providing JavaScript interoperability with Common Lisp has appeared recently:

Red Daly made Lisp bindings to Mozilla's SpiderMonkey, CL-SpiderMonkey. He also has a fork of SLIME that makes things like autocompletion for Parenscript code possible. At the moment SLIME's facilities for extending its services to DSLs are not well developed, but archimag has promised to sort that out.

A certain 3b developed a SWANK proxy and a fork of Parenscript that together make it possible to hook SLIME up to a browser (!) over web sockets. He is also working on a Lisp to Flash compiler.

Marijn Haverbeke, Alan Pavičić, and Iva Jurišić set out to implement a JavaScript compiler in Lisp. In places it already outperforms SpiderMonkey.

July 18, 2010

Put JavaScript in your Lisp and Emacs in your JavaScript

This month, Red Daly announced cl-spidermonkey, a set of Common Lisp bindings to Mozilla's SpiderMonkey JavaScript implementation.

Not long after that, I learned about Marijn Haverbeke, Alan Pavičić, and Iva Jurišić's CL-JavaScript JS to CL compiler. Apparently it's already faster than SpiderMonkey.

Not content with just having JS in CL, Red Daly also has a version of SLIME that integrates Parenscript to provide things like symbol completion (get it on github: http://github.com/gonzojive/slime).

A little while later another surprising discovery occurred: a certain 3b hacked up a SLIME proxy and a fork of Parenscript that lets you run a SLIME/Parenscript REPL in a browser using WebSockets. Apparently this all happened in a couple of days as part of the 2010 Lisp Game Design Challenge.

Unrelated but still cool, 3b also wrote a CL to Flash bytecode compiler.

July 14, 2010

Book review: Nicholas C. Zakas, High Performance JavaScript

Nicholas Zakas' High Performance JavaScript is a collection of tips and techniques for improving JavaScript web application performance. Although everything contained in the book has been published before, this is the first time the information has been compiled into one concise reference.

One of the highlights of the book is the discussion of reflows and interactions between JavaScript code and the browser UI. This area of JS-browser interaction is frequently overlooked by web developers, but is frequently responsible for perceived performance problems. Bringing this area of JS performance to popular attention may be the book's most valuable contribution.

High Performance JavaScript is not a book to learn JavaScript from, and its weakest points are when it tries to review JS principles. The book attempts to provide explanations of JS semantics and implementation techniques, but the descriptions are inaccurate at best, wrong at worst. The explanation of JS scoping rules in particular is misleading and at times erroneous.

The book also includes dubious language-agnostic techniques like Duff's device and backward counting loops. Without guides to profiling, performance tuning principles, and an understanding of JS implementations, all of which the book lacks, this kind of advice will cause more harm than good in the hands of novice JS programmers, and its inclusion into an introductory book on JS performance tuning is questionable. The same can be said for the IE6-specific performance hacks mentioned throughout the book.

While the book has good coverage of contemporary JS profiling tools, it does not attempt to teach approaches to profiling and identifying bottlenecks. Also missing are tips on isolating sources of DOM access and reflow penalties.

Another thing High Performance JavaScript is not is a guide for those working on JS implementations: there is no data on JavaScript feature use or on the performance bottlenecks of contemporary JS web applications and libraries.

High Performance JavaScript is a concise collection of wide-ranging information on improving the performance of JavaScript web applications. However, it should be read with a solid understanding of JS and knowledge of general techniques for identifying and addressing performance problems. Recommended as a reference for anyone writing web applications.

July 13, 2010

Lisp events in North America

The 2010 Workshop on Scheme and Functional Programming will take place in Montreal August 21-22. Colleagues of mine will be presenting a paper on JazzScheme. I plan to attend, and will post my impressions here.

The 2010 International Lisp Conference will be held in Reno, USA, October 19-21. I'm not going this year.

July 5, 2010

Lisp and JS events

Heads-up on some upcoming Lisp and JavaScript events:

Tuesday July 13, the Montréal JavaScript User Group is having a meet-up. James Duncan of Joyent will talk about why software sucks and JavaScript is the end of programming language history. Laurent Villeneuve will demonstrate idiomatic use of closures.

The 2010 Workshop on Scheme and Functional Programming is taking place at the University of Montreal August 21-22.

ILC 2010 will be taking place October 19-21 in Reno, Nevada. Abstracts are due by August 1. I'm not planning on attending.

July 1, 2010

A new Lisp startup

Canadian Lisp programmer Warren Wilkinson recently announced his new project, FormLis. FormLis combines a wiki with a system for generating web forms from very simple markup (example), as well as a schemaless database backing the generated forms.

Warren has written up the design and use of an embedded Forth compiler in Lisp. Interestingly, Doug Hoyte devoted a chapter of his book Let Over Lambda to building an embedded Forth compiler, but seeing one in a web application is unexpected and original.

June 30, 2010

Lisp in startups

Calgary-area Lisper Warren Wilkinson recently launched FormLis, a Lisp-powered web application that combines a wiki with an ingeniously easy way to create custom forms (backed by a schemaless database that automatically manages form definition changes). One of the interesting technical details is Warren's use of an embedded Forth to Lisp compiler. Doug Hoyte's Let Over Lambda (read my review) features a Forth to Lisp compiler as a case-study chapter, but it's interesting to see one used in a real application.

In other news, ITA Software is trying to get acquired by Google for a billion dollars.

May 2, 2010

Postmodern programming

Before the idea of postmodern programming can begin to be investigated, the question of whether anything like modernist and classical programming even exists needs to be asked.

It's not surprising that the question of how to define postmodern programming is reframed by the OO contingent (reflecting their ignorance as much as their immature self-obsession) as literally "what comes after object-orientation?" That is a silly question to ask when you don't know what object-oriented means or what came before.

James Noble and Robert Biddle attempted to address the issue in their Notes on Postmodern Programming, but focused on the act of writing programs and left the question of programming paradigms unexamined.

One possible interpretation gives surprisingly straightforward definitions: the development of the idea of algorithms constitutes the age of classical programming, while procedural and data abstraction, being the rationalization of the construction and application of algorithms, constitute modern programming.

What then is the narrative of a modern program? The evaluation strategy. Algorithms are executed in steps. Modernist programming promotes rationalization in laying out these steps. Postmodern programming rejects the linear evaluation strategy.

Curiously, we can arrive at the same conclusion by framing the relationship between procedural and functional programming in terms of a dialectic:

The thesis of procedural programming is the description of programs in terms of time (sequential execution of instructions, or branching) and behavior (the semantics of instructions), as entities operating on data - object identity, and state of addressable (nameable) places (registers, variables, arrays, etc.).

Functional programming presents a (quite literal) antithesis: programs are described in terms of data - identity (named functions) and state (first-class functions) - and operate on time (persistent/immutable data structures) and behavior (monads).

The synthesis is a nondeterministic, reified (homoiconic), first-class program. The program exists for the sake of itself, becomes the object of study and center of efforts. The computational narrative of the evaluation strategy escapes control, and so from the point of view of the programmer becomes irrelevant.

How, when, and why certain parts of the program are evaluated becomes subjective.

Perl wasn't the first postmodern programming language; Prolog was.

Non-determinism as the rejection of computational narrative and the reification of time arise naturally from physical constraints when attempting to reason about concurrency. Most of the modernist concurrency techniques concern themselves with maintaining what I call the global state illusion, or quite literally forcing a single narrative on a distributed system. Not only does the current state of the system remain unknown and unknowable, but its past history permits an exponential number of equally valid, relative interpretations.

Rich Hickey's 2009 JVM summit presentation explores these concepts of concurrency and time in a thought-provoking manner.

April 20, 2010

Manga about Lisp

For those who read Japanese: I recently came across amusing Lisp tutorials in manga form and an interactive one. For English speakers there is the celebrated comic Casting SPELs in Lisp (whose author has written an entire illustrated book on the subject, to be published this summer by No Starch Press).

Lisp Manga

Many people will remember Conrad Barski's Casting SPELs in Lisp illustrated Lisp tutorial (Conrad's illustrated book, Land of LISP, is due out this summer). I just came upon a Lisp manga and interactive tutorial (both in Japanese) put together by a group of Japanese Lispers. Kawaii!

March 7, 2010

New native AMQP client for Common Lisp

I posted about Common Lisp messaging libraries before, so I'm happy to see that James Anderson has recently released de.setf.amqp, a native Common Lisp AMQP client. (thanks to Vitaly Mayatskikh for the heads-up).

March 1, 2010

Library updates

Last week I released Parenscript 2.1 and Eager Future 0.4. Parenscript gained implicit return (as in Lisp, i.e. you don't always have to write return), multiple value return, and a new reference manual.

Paddy Mullen decided to whip css-lite into shape. For now it can be downloaded here: http://github.com/paddymul/css-lite

Thomas de Grivel forked uri-template: his cl-uri-templates makes it possible to use the substitution operators that were added in the third draft of the URI Template specification. The operators themselves are rather poorly thought out, and in any case uri-template allows ordinary Lisp forms in templates, which is much simpler.

February 28, 2010

Software updates

I released Parenscript 2.1 (Parenscript now has real version numbers!) this week. Check out the updated reference manual.

Eager Future also got a new release making it optionally eager (by popular demand).

Paddy Mullen decided to take over css-lite development. His fork includes new features and actual code examples. You can find it on github (ASDF-installable package coming soon): http://github.com/paddymul/css-lite

Thomas de Grivel forked uri-template: cl-uri-templates adds the substitution operators introduced in the third draft of the URI Template proposal. I think they are quite horrible and so have abstained from implementing them.

January 8, 2010

First Montreal JavaScript user's group meeting

The inaugural meeting of the Montreal JavaScript user's group will take place in two weeks, on January 21 (to coincide with CUSEC) at 7pm at the offices of Bloom Digital (481 Avenue Viger Ouest, Suite 202, right near Square-Victoria metro).

Laurent Villeneuve will present a talk about JavaScript domain-specific languages. You can follow announcements on Twitter (jsmontreal) and the mailing list.