March 26, 2009

P-Cos Blew My Mind

This post contains my impressions about the rest of ILC 2009. I'll start off with something that really impressed me.

One of the things I have been thinking about lately is database schema evolution. The flip side of database schema changes is the corresponding changes they entail in the client code. Combining the two with live system upgrades can give rise to inconsistencies. Even ignoring databases, loading code into a multithreaded Lisp image can result in undesired behaviour, since loading is non-atomic.

This problem was discussed on comp.lang.lisp several years ago, with the general consensus being that some sort of atomic loading mechanism was needed to address the problem.

However, atomic loading alone does not eliminate inconsistencies for systems that provide services via asynchronous protocols, such as web applications. Any user in the middle of a workflow consisting of a chain of requests to HTTP resources will be affected if any of those resources are changed.

Pascal Costanza came to the rescue by presenting (during a lightning talk) a solution utilizing context-oriented programming. Each set of updates was integrated into the system as a new layer. Atomicity is provided by the fact that layers can be (are?) activated atomically, and can be made completely independent of each other. By associating layers with web application sessions, Pascal was able to preserve consistency for those users in the middle of a session during the time of update.

In a fascinating discussion on protocol versioning on Jane Street's OCaml blog a while back, someone advocated essentially the same approach to the problem: run two servers with different versions of the application. Pascal's context-oriented approach not only eliminates the need for the extra hardware (one server for each version of the protocol), but also provides an elegant way to structure the code for multi-version protocols, and to apply other, orthogonal updates that affect users of all protocols provided by the system, which would otherwise require backporting patches.

If database access is implemented as an abstraction layer that acts as a protocol, the same technique can be applied to schema changes as well.
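The layered-upgrade idea can be sketched outside of Lisp as well. ContextL provides the real mechanism in Common Lisp; the Python below is a minimal, hypothetical illustration (every name in it is invented for this sketch, not ContextL's API) of pinning each session to the layer - here, a schema version - that was current when the session began:

```python
# Hypothetical sketch of per-session "layers" for live upgrades, inspired
# by context-oriented programming. Each layer bundles the behaviour of one
# version of the system behind the same logical protocol.
LAYERS = {
    "v1": {"get_user": lambda db, uid: db["users_v1"][uid]},
    "v2": {"get_user": lambda db, uid: db["users_v2"][uid]["profile"]},
}

class Session:
    """A web session pinned to the layer current when it began."""
    def __init__(self, layer_name):
        self.layer = LAYERS[layer_name]

    def get_user(self, db, uid):
        # Dispatch through the session's layer: in-flight sessions keep
        # the old behaviour, while new sessions see the upgraded schema.
        return self.layer["get_user"](db, uid)

# Toy "database" holding both schema versions during the migration.
db = {
    "users_v1": {1: "alice"},
    "users_v2": {1: {"profile": "alice", "created": "2009-03-26"}},
}

old = Session("v1")  # started before the upgrade
new = Session("v2")  # started after the upgrade
print(old.get_user(db, 1))
print(new.get_user(db, 1))
```

Both sessions see a consistent answer through the same protocol, even though they read different schemas underneath.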

Amid the above discussion it is easy to overlook a really neat fact, which Nick Levine's lightning talk drove home: load and fasls make a really great patch distribution mechanism. You can view the code from his talk here.

Shriram Krishnamurthi gave a very nice talk about hooking kids on Scheme by letting disadvantaged inner-city middle-schoolers put lambdas in their lambdas so they can make videogames on their cellphones. Purely functional programming à la How to Design Programs turns out to be a really great way to teach algebra. Shriram also dislikes the OLPC project and thinks Etoys is weak.

Gerald Sussman gave a talk that essentially argued against formal methods for systems design, and for extensible systems with flexible failure modes.

One of the key reasons why MIT shifted its introductory CS curriculum away from the SICP-style, bottom-up abstraction-building approach is that the behaviour of modern system components is too complicated for them to be either well specified or modelled in a reductionist way. Much of the time engineers have to resort to an empirical approach to discovering the actual semantics of a sub-component.

To manage this complexity, Sussman argued for open systems that permit a great degree of extensibility and continue to function under various failure modes. An example he used was an autopilot system: it should continue to work under unexpected conditions such as control surface failures. This echoes some of Richard Gabriel's ideas about mob software. Gabriel was at the conference, but the term itself never came up; he did, however, object to macros at the macros debate, arguing that they would be a hindrance to constructing systems worked on by thousands of programmers.

Fail-fast behaviour of the whole system under unexpected circumstances is something formally specified systems cannot avoid. A great quote from Sussman was: "proofs considered harmful." What Sussman was arguing for, in his words, was high-quality abstract specifications, not low-quality well-specified ones.

Sussman presented several ideas about taking inspiration from biological systems, among them biological degeneracy, which can best be summed up as "doing different things to get the same result," and the fact that a relatively small number of genes are responsible for common physiological structures in organisms.

The bio-mimetic argument was echoed again in a lightning talk by someone whose name I unfortunately cannot recall or look up (as the full list of lightning talks has not yet been posted anywhere). One of the things mentioned was parallelism in multi-cellular systems - of course I had to jump in and point out that the best lesson we can take away from parallelism in the real world is a rejection of the concept of global state (more on that in an upcoming post).

Olin Shivers gave essentially the same talk about his system for expressing scoping and iteration in terms of graphs as he did at Danfest, and it wasn't any more exciting this time. Someone actually fell asleep and started snoring loudly a few rows down from me. The discussion at the end was quite good, though; one of the things Shivers emphasized, echoing Sussman's keynote, was that unspecified behaviour actually gives you more opportunity for expressive power.

I gave a lightning talk based on a blog post I made previously, arguing that the criterion for whether language X is a Lisp should be how well you can write a metacircular evaluator for that language. No one booed or threw things at me, and some very smart people made insightful comments, so I think it went ok. The question of static typing came up, and after the talk I had a big argument with a couple of people trying to explain what type systems are and what they do. Typed calculi need to be taught as part of an undergrad CS curriculum - most people have a very skewed understanding of what type systems are, and this lets language "designers" get away with doing a lot of stupid shit.


Yves said...

One of the things mentioned was parallelism in multi-cellular systems - of course I had to jump in and point out that the best lesson we can take away from parallelism in the real world is a rejection of the concept of global state (more on that in an upcoming post).

I completely agree, but I have to point out that the real world does have global state :)
I'm working on quantum computing and noticed that entanglement in a distributed system implies global state. (n.b. it implies global state in a classical simulation; with a programmable quantum computer it would be physics managing the global state ;)

Vladimir Sedach said...

That's an interesting interpretation. I thought about quantum systems when it first started to occur to me how weird "global state" really is, but it seemed that the no-communication theorem implies that you couldn't do anything useful with entanglement. Of course, by "useful" I meant mutable and deterministic/serializable.

I suppose quantum computation is by itself "parallel" already, and if Deutsch is right and there are in fact multiple worlds doing the computations then it is actually very close to the idea of parallelism as we know it.

Still, the no-communication theorem applies. Unless action-at-a-distance is possible there is just no physical analog to mutable global state. The entire concept only works because our perception of the flow of time is fixed vis-a-vis the rest of the world, which happens to be just quick enough to execute consensus algorithms over large networks that we don't get annoyed.

Yves said...

The no-communication theorem indeed applies, but what it says is that you cannot have instantaneous transfer of information. E.g., the classic example: you can teleport a quantum bit, but doing so requires classical communication first, so the transfer is not instantaneous.
There still are some interesting properties that can be used to do things simply impossible in a classical setting. The most popular algorithms today are the Quantum Key Distribution protocols. These use the spooky action-at-a-distance effects to create a one-time pad simultaneously between two parties. It's not useful as in global mutable state, but it could imply that god hacked in some global variables :D

N.b. just to refine my earlier statement: the implication is that to correctly simulate quantum entanglement on a classical computer, you need global state. (You could hack something together with actors, which is basically the same.)
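As a rough illustration of that last point, here is a minimal Python sketch (an assumption-laden toy covering only measurement in one fixed basis, nowhere near a full simulation): the two "remote" endpoints of a simulated entangled pair must consult one shared record the moment either is measured, and that shared record is exactly the global state in question.

```python
import random

# Toy classical simulation of a maximally entangled pair (|00> + |11>)/sqrt(2),
# measured in the computational basis only. Both "remote" qubits share one
# record -- the global state a faithful classical simulation cannot avoid.

class BellPair:
    def __init__(self):
        self.outcome = None  # shared (global) state for both endpoints

    def measure(self):
        # The first measurement "collapses" the pair; the result is
        # remembered so the other endpoint is forced to agree.
        if self.outcome is None:
            self.outcome = random.choice([0, 1])
        return self.outcome

pair = BellPair()
a = pair.measure()  # "Alice" measures her half
b = pair.measure()  # "Bob" measures his half, arbitrarily far away
assert a == b       # outcomes are perfectly correlated
```

Without the shared `outcome` slot there is no way to make the two distant measurements agree, which is the sense in which the simulation needs global state.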