John Wiegley's ledger is a popular double-entry accounting system with a Unix command line interface.
What many people don't know is that version 3 of ledger was written in Common Lisp. This version was never made into an official release. In a FLOSS Weekly podcast (at 31:00), Wiegley explains that Common Lisp wasn't the best choice for ledger's users.
I emailed John to learn more about this. He replied that there were only two major problems: building cl-ledger was more difficult than building ledger, and that cl-ledger could no longer be scripted from the command line. In effect, the Common Lisp REPL had stolen the place of the Unix command line as ledger's interface.
cl-ledger was written in 2007, and there are now good solutions to these problems. ASDF works well as a build system, but before Quicklisp, dependency management for Common Lisp applications was difficult. Quicklisp solved the biggest outstanding problem in building Common Lisp applications. (PS - you can give Zach a Christmas gift for his work on Quicklisp)
Didier Verna's Command-Line Options Nuker is a widely portable Unix CLI library with many features that you can use to build command-line driven Common Lisp applications.
December 24, 2011
December 22, 2011
The basis of wealth is exclusively material. Everything non-material in an economy is a social convention.
Money, trade, and even labor are worthless unless they satisfy a particular desire at a particular point in time (see Deleuze and Guattari's Anti-Oedipus on the role of desire in capitalism).
It's easy to accept that money is a social convention, but how can labor be worthless? The famous illustration is Bastiat's "broken window" fallacy. While Bastiat's conclusions are correct, his reasoning starts with the wrong assumptions. Wealth has nothing to do with the classical notions of trade and utility and "money better spent elsewhere" - the broken window fallacy is directly explained by the material reality of objects.
The key thing to understand is that material objects are cumulative and impermanent. These two qualities are what drive everything else about wealth.
A window is cumulative in that it satisfies a desire and has a physical manifestation, and it is impermanent in that the physical manifestation is now broken and the desire is no longer satisfied. Even if currency were completely devalued tomorrow, the fact that a window is there would still satisfy the desire.
This is why the labor theory of value (indeed any other theory of value that doesn't take into account Deleuze and Guattari's desire machines as the ultimate driver of the economic process) is wrong. Labor or technology itself has no value whatsoever, unless it ultimately (if through a long series of immaterial social transactions) results in the production of a material object that satisfies a desire.
So far this essay has talked about labor, but what about trade? To understand trade, first we need to define the word: there is no such single thing as "trade," rather the word refers to two things, which may or may not be present in a particular "trade" (transaction): the social (buyer/seller, consignee/consignor relationship) and the material (embodied in labor as transportation of material objects). A purely social transaction would be finance, a purely material one would be theft.
Trade is obviously important in satisfying desire - if a material object is not in the right place at the right time, it can't satisfy that desire. A chain of social trades resulting in the transportation of a material object then obviously has value.
One of the most pressing questions today (see: SOPA) is: where does this leave purely social transactions? If you want $10 for a pile of bits - money that will buy you lunch - but someone else is satisfied with a "thanks for sharing!" for the same pile of bits, where does materialism come in?
The material reality of the world is that those bits are worthless. The movie, music, and publishing industries were built on material objects: selling time slots in seats in a movie theater, selling vinyl and plastic discs, selling bound stacks of paper. The particular content on those material objects was in a very fundamental way completely irrelevant to their business, even if paradoxically it was the key to their business model.
Knowledge may be cumulative, but it is worthless unless it can be applied to satisfy a desire. It is also permanent - it cannot be stolen. What knowledge is great at is helping produce better material objects with less cost and greater ability to satisfy desire.
The real competition to the movie, music and publishing industries are the computer manufacturers and ISPs.
What the MPAA and RIAA and the SAG are doing when they attempt to put in digital restrictions management into computer hardware and force ISPs to filter content is the equivalent of the Luddites burning water mills and power looms. This is a strategy that will ultimately fail, but in the short term causes a slow-down in the rate of improvement of material objects, both directly (PCs and Internet connections suck more because of attempts to implement digital restrictions management), and indirectly (this improvement in the production of objects is driven by knowledge produced with the aid of PCs and the Internet, in a cumulative process).
So what about the MPAA excuse that no one will be able to finance the production of big-budget action movies anymore? At a time when the very same progress in material production is drastically reducing the cost of producing a movie (via an all-digital process and computer-generated imagery), this is exactly like arguing that no one will be able to afford to author books during the time of Gutenberg's invention of the printing press.
The creative urge is a desire in and of itself. If there's anything you should take away from this essay, it's that people pay to have their desires satisfied.
December 20, 2011
- gotos make it possible to write bad programs
- threads make it possible to write bad programs
- global variables make it possible to write bad programs
- anonymous functions make it possible to write bad programs
- macros make it possible to write bad programs
- mutable variables make it possible to write bad programs
- continuations make it possible to write bad programs
- dynamic scoping makes it possible to write bad programs
- objects make it possible to write bad programs
- recursion makes it possible to write bad programs
Take this argument far enough, and you are left with the S-K combinators, and now it is impossible to write good programs.
Having few features in a programming language is a fault, not a virtue. The bigger fault lies in failing to provide the language with the facilities to be extended with new features.
No amount of language design can force a programmer to write clear programs.
--Guy Steele & Gerald Sussman
December 12, 2011
CLiki, the Common Lisp wiki, is a good resource for finding out about Common Lisp libraries and other information. However, the code behind CLiki itself is hard to maintain and add features to.
Andrey Moskvitin and I have been working on a replacement wiki web application on and off for the past eight months. The first public beta came out in the summer. Since then, I've worked on the software to the point where I think it's ready to power CLiki.
The second beta of CLiki2 is now up at http://18.104.22.168/. Please try it out and let me know what you think. Bugs can be reported at https://github.com/archimag/cliki2/issues or by sending me email: email@example.com
Most of the new features center around spam prevention:
- Wikipedia-style history lists and diffs for all pages
- Lists of edits by account/IP address
- Blacklist of accounts and IP addresses
- Atom change feeds for individual pages (as well as all of CLiki)
- HTML tag filtering
- Real article deletion and undeletion
- Code coloring using cl-colorize
- Working list of uncategorized/orphan articles
- Pages that work well in text browsers (and hopefully screen readers)
Behind the scenes, CLiki2 is powered by Hunchentoot, BKNR-datastore, and Nathan Froyd's diff library.
Source code is at https://github.com/vsedach/cliki2, and is licensed under the Affero GPL.
November 28, 2011
After a long hiatus, the Montreal Scheme/Lisp Users Group (MSLUG) is back to regular meetings, with the latest one taking place November 24.
Marc Feeley gave two talks, the first presenting his experiences adapting the Gambit Scheme implementation into the Gambit REPL Scheme interpreter app for iOS (most of the difficulties seemed to revolve around the Apple app store screening process). (slides)
Then Marc showed and discussed a demo doing distributed computing using mobile, serialized continuations bouncing around Gambit instances on x86 Macs and ARM-based iPhones/iPods. The slides are well worth checking out, particularly for insight into how serialization of problematic objects like I/O streams/ports is done.
There's at least one Common Lisp project (cl-walker) that claims to be able to serialize continuations, and another library (Storable Functions) that claims to be able to serialize closures (from which you can get continuations with a CPS transformer). I haven't tried either, and to my knowledge there hasn't been any work done on mobile continuations for Common Lisp, or much work on mobile code in CL in general.
The next MSLUG meeting is tentatively January 19th; I'm scheduled to present a talk on Parenscript.
November 5, 2011
Previously, this was implemented using the #+parenscript read-time conditional in the source files.
That worked ok if you loaded Parenscript before loading css-lite, but there were two problems:
- If you loaded the css-lite fasls compiled with Parenscript into a fresh Lisp image without loading Parenscript first, you'd get an error.
Both of these errors stem from the fact that ASDF didn't know anything about the optional Parenscript dependency.
Didier Verna has written about optional ASDF dependencies previously (make sure to read the asdf-devel thread on optional dependencies Didier links to if you're interested in this). In short, relying on ASDF's :weakly-depends-on seems quite hairy.
I think I found a simple alternate solution for uri-template that seems to work: put all the Parenscript-dependent code into one file, and then use read-time conditionals in the uri-template.asd list of files like so:
:components ((:file "package")
#+parenscript (:file "parenscript-implementation")
You can see the full implementation in the latest patch to uri-template.
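Put together, the technique looks something like the following sketch of a defsystem. This assumes (as the #+parenscript conditional requires) that loading Parenscript pushes :parenscript onto *features*; the component names other than "package" and "parenscript-implementation" are hypothetical:

```lisp
;; uri-template.asd (sketch) - the #+parenscript conditional is resolved
;; when ASDF *reads* this file, so the optional component only exists
;; in the system definition if Parenscript was already loaded.
(asdf:defsystem :uri-template
  :components ((:file "package")
               (:file "uri-template" :depends-on ("package"))
               #+parenscript
               (:file "parenscript-implementation"
                :depends-on ("uri-template"))))
```

The consequence is that the .asd file must be re-read after loading Parenscript if you want the optional component picked up, which is why this works best for small, self-contained optional features.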
Let me know if you have any ideas about this technique, or optional dependencies in general.
October 30, 2011
Now that Conrad Barski's Land of Lisp (see my review on Slashdot) has come out, I definitely think Common Lisp is the best language for kids (or anyone else) to start learning computer programming.
Between Land of Lisp, David Touretzky's Common Lisp: A Gentle Introduction to Symbolic Computation (really great book for people new to programming, available for free) and The Little LISPer (3rd edition, editions four and up use Scheme) you have three really great resources to get started.
Lisp's syntax is a great advantage because it is so simple to learn and has so few special cases. The interactive, iterative development style and real late-binding means you can build programs in parts and add to them as you go. The presence of real metaprogramming means you always have the ability to look at any part of your program and its state to find out what it's doing/what's wrong. The HyperSpec and Common Lisp The Language are two of the best programming language reference manuals ever written.
The best parts about Common Lisp are that it's a language that's hard to outgrow and that it makes difficult things easy. One of the chapters in Land of Lisp explains HTTP and HTML and has you build a basic web server. That chapter is only 15 pages! There are tons of ideas in the language, and because you're not restricted to a particular programming paradigm, you're always discovering better ways of doing things and developing a personal style.
October 14, 2011
Jobs wanted people to love his products, take care to notice their craftsmanship, and be creative with them. They were supposed to help you make and do awesome things. But this love and attention to creativity was not extended to those involved in the manufacturing process.

I've been meaning to write about this subject, and now seems a good time.
The iPad, the iPhone, and the Apple App Store are not leading to a new age of digital freedom and creativity. They are creating the real digital divide.
Original PCs used to ship with a Basic interpreter. When that stopped, you could still get a programming language implementation without too much trouble. But Apple goes out of its way to make the iPad and iPhone not programmable by anyone except a self-selected caste of "developers."
The awesome things you can do with the iPad have very real limits. Limits that are unnecessary, artificially imposed, and at core opposed to iPad's essence as a programmable computer.
Ellen Rose wrote about the infantilization of computer "users" in User Error: Resisting Computer Culture, but few products until the iPad have shown how literal this effect is. Consider Apple's marketing:
Does the above image remind you of anything?
Further evidence of how literal the infantilization has become is the infamous "fart app" - it is nothing but a direct throwback to the anal stage of Freud's model.
Richard Stallman made a poorly received comment on Jobs' legacy upon news of the latter's death. I think the negative consequences of the iPad extend well beyond Apple's hostile and exploitative stance towards Free Software.
September 23, 2011
A substantial fraction of workers were absent on any given day, and those at work were often able to come and go... at their pleasure to eat or smoke... [the workplace] would have eating places, barbers, drink shops, and other facilities to serve the workers taking a break. Some mothers allegedly brought their children with them... Workers' relatives would bring food to them.

--A Farewell To Alms, Gregory Clark, p. 363
This is not about the Google complex, but a description of a 19th century Indian textile factory.
September 20, 2011
One Common Lisp feature that needs more publicity is case sensitivity. A common misconception is that Common Lisp is case insensitive, when in fact symbols in Common Lisp are case sensitive.
By default, the Common Lisp reader is case-converting: all unescaped characters in a symbol name get upper-cased. This gives the practical effect of making it seem as though symbol case doesn't matter. This is desirable behavior for interfacing with other case-insensitive languages (such as Fortran; from what I understand the main motivation for the default Common Lisp behavior), but a pain to interface with case-sensitive ones (such as C).
The behavior of the reader can be customized via readtable-case.
At first glance, :preserve might seem the most useful setting for case-sensitive symbols. But remember that all code read with the default setting (:upcase) is upper-cased, as are all the standard Common Lisp symbols (this is defined by the standard), so with :preserve you would need to spell out all CL and external symbols IN ALL UPPERCASE. The :invert setting is more practical: all-lowercase symbol names become uppercase, all-uppercase names become lowercase, and mixed-case names stay mixed-case (the important part for case sensitivity). The Lisp printer outputs symbol names correctly this way by default. The only remaining problem is old code that expects case conversion and spells the same symbol inconsistently in all-lowercase and all-uppercase. But otherwise you can get case sensitivity for your software by setting readtable-case to :invert today.
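A minimal sketch of what :invert does (the readtable variable name here is made up for illustration):

```lisp
;; Build a case-sensitive readtable by copying the standard one
;; and setting its readtable-case to :invert.
(defvar *invert-readtable* (copy-readtable nil))
(setf (readtable-case *invert-readtable*) :invert)

(let ((*readtable* *invert-readtable*))
  ;; all-lowercase input reads as an upper-case symbol name,
  ;; so lower-case source code still finds the standard CL symbols
  (print (symbol-name (read-from-string "foo")))       ; => "FOO"
  ;; mixed-case input keeps its case - the case-sensitive part
  (print (symbol-name (read-from-string "camelCase")))) ; => "camelCase"
```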
An easy way to manage the readtable-case is by using the named-readtables library. I've recommended named-readtables before; besides case sensitivity, it helps manage reader macros.
[This blog post is adapted from the case sensitivity CLiki FAQ entry I wrote. Feel free to make corrections and other suggestions on the CLiki page.]
September 13, 2011
August 28, 2011
We like to assume that people are basically competent and rational. Computer programmers enjoy pretending they are more competent and rational than people in other professions.
In many cases neither of those assumptions is true.
Two popular memes surrounding programming languages in the 80s and 90s were the assertions that "garbage collection is too slow" and that "dynamic typing doesn't work for large programs."
Many programmers were convinced both those things were true. In hindsight they were completely wrong, as languages such as Tcl (more on the importance of Tcl in the history of programming languages in an upcoming post), Perl, Java and Python "dragged people halfway to Lisp" and changed public perception.
How could so many people who consider themselves above-average in competence and rationality be so wrong? (Assume that every programmer is indeed a special snowflake and the Dunning-Kruger effect doesn't apply).
A hermit spent 10 years writing a program. 'My program can compute the motion of the stars on a 286-computer running MS DOS,' he proudly announced. 'Nobody owns a 286-computer or uses MS DOS anymore,' Fu-Tzu responded.
The problem is that programmers seem unable to think even a couple of years into the future. People complaining about garbage collection in the 80s were looking back at their existing 8-bit Trash-80s instead of at contemporary computers being produced and the future computers being planned. The idea that computers can be good at automating rote tasks like managing memory and checking and inferring types never occurred to them.
People have trouble imagining the future even if the trends, such as Moore's law, are in front of them. It takes a very long time for people to understand the right ideas. Just ask Alan Kay. Being able to find the appropriate point of view really is better than a high IQ.
Here are some other misconceptions about Lisp that programmers hold out of ignorance, and that will take a long time to dispel:
- tail recursion is unnecessary and makes debugging difficult
- macro-based metaprogramming results in unmaintainable programs
August 12, 2011
I predict one of the first things Google will do with self-driving cars is automate the trucking industry. This will be a huge change in terms of improving shipping efficiencies, but I don't believe that it will fundamentally disrupt the logistics industry: at most trucking companies, the turnover rate for drivers is over 100%. Truck drivers are already treated like robots.
The current passenger automobile system, on the other hand, will undergo a complete and total change.
Self-driving cars will lead to an almost complete elimination of both privately owned cars and public transportation in cities. Robot taxis will become so cheap and ubiquitous, and parking space so expensive, that it no longer makes sense to own your own car (many residents of NYC and San Francisco already find parking unaffordable today).
The first step to this is already being implemented - the automation of taxi dispatching.
Robot cars are safe. The auto insurance industry will be virtually eliminated.
Robot cars need less maintenance and will refuel themselves. Most gas stations and service shops will be consolidated into a few large service depots.
A city will have fewer automobiles but its citizens will use cars more often - even at peak demand, a smaller fleet of robot taxis than private vehicles can service commuter needs.
Self-driving cars have better road capacity utilization, so even with increased automobile usage the amount of paved roads in cities will be reduced, as unused lanes are reclaimed for real estate development. The same thing will happen to parking lanes and lots - even as demand for parking goes down, the price will rise as the parking spaces get reallocated to more profitable real estate development and the supply shrinks at a faster rate. This will have the effect of greatly reducing road maintenance expenses and increasing property tax income for city governments.
The layout of cities will return to the pre-automobile era, the most visible changes being narrower streets.
On the other hand, the highway system will face pressure to expand, as robot taxis will undoubtedly be used as a substitute for air, train, and bus travel. Robot taxi operators will be national or even international in scale, and who cares if a particular robot taxi was working in New York yesterday and is in Chicago today, as long as on average the operators' fleet utilization is maximized? The key ability of self-driving cars to link into aerodynamic paceline "trains" (much like bicycle racing teams) will make long-distance fuel consumption competitive with trains and buses.
The robot taxis by themselves will also be much more aerodynamic than today's cars. With the elimination of private ownership, automobile body design will no longer be driven by the status symbol desire, but by taxi operators' need to minimize fuel consumption.
What does this mean for public transit? Buses and street-level tramways will be out, but subway networks will likely remain viable because subways are not vulnerable to traffic jams and snow.
In terms of traffic, it's likely that the top speed of a journey will decrease, while the average speed increases. Robot cars can potentially negotiate intersections much more effectively than human drivers. Congestion at peak times will likely still be a problem in city centers due to decreased road capacity, but the traffic jams are likely to be shorter and involve higher average speeds.
I think the root of your mistake is saying that macros don't scale to larger groups. The real truth is that macros don't scale to stupider groups.

--Paul Graham on ll1
People who design programming languages sometimes like to imagine an idealized "average programmer" who will employ their design. The underlying assumption being that the language designer is smarter than the "average programmer," and will set out to protect the latter from their own incompetence.
The arrogance behind this view is twofold - not only is the language designer deeming himself objectively smarter than other people, but that he will be able to predict how other people's stupidity will play out. In view of this egotism, the (lack of) quality of the end result should not be surprising.
This objection -- "but bad programmers will make a mess of it" -- is the stock objection everybody makes to every unorthodox programming construct. Since it is an objection to everything, it is an objection to nothing.

--Daniel Gackle on programming language features
June 6, 2011
Many people still seem to regard continuations as a possible or even preferable method for writing web applications. This blog post aims to dispel that notion and demonstrate that continuation-based web apps belong in the 90s.
Why are continuations good?
Anton van Straaten's excellent Continuations Continued argues that continuations are a good way to model server-side code, ergo they are a good way to implement server-side code. The modeling assertion is correct in some instances; the implementation assertion is not.
Why are continuations bad for clients?
There are well-known problems with continuation-based web apps: bookmarks, history, and back/forward buttons don't work.
A web application session is a call-graph from the point of view of the browser, where the URLs are akin to procedures. HTTP interaction flows like a program, with the user making decisions of which procedure to invoke/URL to visit. By this analogy, using continuations is exactly like giving random names to all of the procedures in a program each time a procedure is called.
This is the core of the problem with continuation-based web applications. Everything that revolves around user control of accessing URLs (bookmarking, history, back/forward, etc.) breaks. This also makes it much harder to test continuation-based web applications programmatically and makes debugging harder.
One incidental advantage of this breakage is that some URLs do need to be unique and single-access to prevent cross-site attacks and duplicate form submissions. I argue that these mechanisms should be thought of as token-issuing state machines, and implemented explicitly. This leads to simpler code and manifest state.
Another strategy that has been used is storing continuations on the client side using cookies or URL query parameters. This approach is problematic because of the amount of data it transmits on each request, and because of the security implications: the continuations need to be encrypted, and the keys frequently rotated and expired. But expiring continuations is exactly the problem that query-parameter serialized continuations were supposed to avoid - links that rely on continuations stored on the server cannot be bookmarked!
Why are continuations bad for servers?
The essence of using continuations server-side is handing off control of inter-request state serialization to an implicit mechanism that is tied to the structure of application code.
Both data and logic are now intermingled and stored in opaque continuation structures. This makes the code hard to debug, state difficult to replicate for fail-over redundancy, problems difficult to reproduce, and control flow difficult to understand.
What should you do?
April 16, 2011
There is still some debate around whether programming qualifies as a creative endeavor akin to writing, arts, or crafts. Paul Graham attempts to draw analogies between hacking and painting (unconvincingly, some argue).
The answer is a strong positive if you examine the motivational factors (examining motivation to get better insights is something that I have emphasized before).
How else can you explain the motivational factors of people working on Free Software? Of people programming at work, and then going home and programming as a hobby? Of working on multiple, related and unrelated, projects simultaneously, sometimes over periods of years or decades at a time?
Another obvious but almost never discussed aspect of programming as a creative pursuit is that it is almost impossible to succeed in programming as a career if you do not enjoy your work. This is true for all creative professions, but can you argue the same for plumbers or assembly-line workers or, closer to the idiotic "knowledge worker" label, accountants?
Succeeding as a programmer of course has nothing at all to do with succeeding at being employed as a programmer, amusingly enough because of the widespread belief that programming is a non-creative profession and that 9 women can make 1 baby in 1 month. With perverse incentives such as "lines of code written" (when the only good thing about lines of code is how many you can remove) and no understanding by management of the impact of such things as technical debt, unit testing, or even basic things like quality, hapless code monkeys can stay on the payroll. But how many of them are recognized (in a positive way, mind you) by their peers? How many of them choose to continue to do programming into their 40s? The hapless code monkeys usually switch careers or "advance" themselves into the ultimate bastion of incompetence: management.
April 7, 2011
I've been thinking about mesh networks recently. Two of the better-known projects in this space are Open-Mesh and the Mesh Potato. Both use 802.11 to build a wireless network, but experience shows that wireless interference is a major problem in scaling these networks.
The solution seems obvious: add a HomePlug power line communications interface to the mesh routers. That way routers adjacent on the power grid (and so presumably adjacent to each other and other wireless devices, that is, in places with a lot of interference) can do forwarding over the wire.
If you add HomePlug interfaces to devices themselves (are there any PC or notebook power supplies with HomePlug built-in?), pervasive mesh networking starts to look like an inevitability.
March 29, 2011
Here's a bunch of links to interesting Common Lisp stuff I've come across lately:
EOS is a drop-in replacement for the FiveAM test framework which has no dependencies on external libraries (one of the dependencies of FiveAM is heavily implementation-specific and doesn't work on LispWorks 6, among other platforms). I've converted all my FiveAM projects to EOS, and recommend you do the same.
I've ported the Multilisp benchmarks from Marc Feeley's PhD dissertation to Eager Future2. Some of them tax your Lisp implementation a lot, and might reveal bugs. On my 32-bit x86 system, the only implementation to run all benchmarks to completion without segfaulting was SBCL. Try them on yours: load :test.eager-future2 and call benchmark.eager-future2:run-benchmarks.
Speaking of Marc Feeley, here is some other cool work he has been involved in:
Vincent St-Amour and Feeley came up with PICOBIT, an R4RS Scheme that can run Scheme programs in a total of 256 bytes of RAM and 8 KiB of ROM (including VM footprint) on PIC microcontrollers.
Feeley gave a talk about Gambit Scheme at ILC2010, and had a really great set of slides which you can get at http://www.iro.umontreal.ca/~gambit/Gambit-inside-out.pdf (warning: 24MB PDF file!)
FORMAT continues to amaze. Peter Seibel pointed out Erik Naggum's cool hack for formatting dates on the Lisp-pro mailing list. I recently learned that instead of passing nil or t or a stream to
FORMAT, you can pass a string with a fill-pointer, and "the output characters are added to the end of the string (as if by use of vector-push-extend)."
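For instance, a minimal sketch of that fill-pointer trick:

```lisp
;; FORMAT's destination can be a string with a fill pointer;
;; output is appended as if by VECTOR-PUSH-EXTEND (the string
;; must be adjustable for the extending to work).
(let ((out (make-array 0 :element-type 'character
                         :adjustable t :fill-pointer 0)))
  (format out "~d bottles of ~a" 99 "beer")
  out) ; => "99 bottles of beer"
```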
Jonathan Fischer wrote a good article about feeding binary data from Common Lisp to C.
Another thing you can do with C is add s-expressions and macros - c-amplify does just that. c-amplify is motivated by the needs of game development, as was GOAL, Andy Gavin's Lisp system which powered the Jak & Daxter videogame franchise. Now there's a GPL-licensed attempt to create a GOAL-like environment called Score.
Other places besides C you can now run Lisp:
cl-gpu translates a subset of CL to CUDA GPU kernels. Arduino Lisp translates a subset of CL to a subset of C++ that can be compiled to run on the Arduino microcontrollers.
I keep promoting named-readtables (seriously, try it!), but Tobias C. Rittweiler has more cool libraries you should check out:
Hyperdoc provides a way for your Lisp library documentation to have fancy HyperSpec-like lookup in SLIME.
Parse-Declarations is useful if you're building Lisp translators or code-walking tools.
There was a very interesting discussion of syntax and s-expressions on Hacker News; one of the things I learned is that according to X-bar linguistic theory, all natural languages basically consist of s-expressions. Of course the great thing about Common Lisp is, like natural languages, it is both homoiconic and its words have different meanings in different contexts. Alan Bawden came up with the "Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo" let quine of the Lisp world (another one of the cool things I learned from Doug Hoyte's Let Over Lambda).
If you've been looking for example code of large CL web applications, you should check out cl-eshop. cl-eshop currently runs the «ЦиFры» St. Petersburg store chain website and is licensed under the AGPL (Affero GPL).
March 22, 2011
A common mistake programmers seem to make is assuming that abstraction is about putting a layer of indirection (whether through function calls or data structures) into a program. Indirection is one of the more commonly used tools to implement abstractions, but just because there is now an extra layer of function calls in your program doesn't make it more understandable, maintainable, or a better fit to the domain. Those are better (although somewhat subjective) criteria for what makes effective abstractions than layers of indirection.
Another thing programmers like to do is argue about programming languages and the abstractions (over machine language) they provide. Oftentimes you'll hear "language A has feature X" and the refrain "language B doesn't have feature X and is good enough, so feature X isn't important/is harmful." When iterated enough times, it's easy to see that this becomes a sort of reductio ad assembler argument.
Is a sufficiently powerful assembler good enough? So what makes C better than assembler? Why is C special? And what's better than C?
It is productive to be able to both ask and answer the last question, which is why real metaprogramming is an invaluable thing to have.
March 15, 2011
Freedom Zero is the freedom to run the program as you wish.
Freedom 1 is the freedom to study the source code, and change it so the program does your computing as you wish.
Freedom 2 is the freedom to help others; that's the freedom to make and distribute exact copies when you wish.
And Freedom 3 is the freedom to contribute to your community, which is the freedom to distribute copies of your modified versions when you wish.
An interview with Richard Stallman
February 15, 2011
Does anyone have an 8- or 16-way machine gathering dust? I need a many-processor box to do tests and tuning for future Common Lisp library implementations (I recently ported the Multilisp benchmarks from Marc Feeley's PhD dissertation to Common Lisp). Age or architecture is not a concern, as long as it runs SBCL or ECL. Will pay shipping. The box will live at Foulab, where it will be given a good home for software projects.
If you have such a system, or know someone who does, get in touch at vsedach at gmail.com
January 24, 2011
Let's walk through the American Society of Agricultural and Biological Engineers' submission guide to see why:
It is helpful, but not required, to prepare your manuscript using ASABE Manuscript Templates and to follow the Journal Manuscript Format. Please include line numbers and page numbers on each page (the templates will do this for you).
Do your own proofreading, editing, and even layout. Doesn't sound unreasonable.
Copyright Transfer Form
A complete Copyright Transfer Form must accompany your submitted journal manuscript. (Note: The Copyright Transfer Form replaces the earlier Manuscript Submission Form.) The manuscript will not be reviewed until the Copyright Transfer Form is received.
Now they expect to take my copyright away from me? Can't they at least bother to do the layout themselves?
Please note that authors are required to pay, at publication, a page charge based on the number of published journal pages. The current charge is $100 ($110 for non-members) per 8.5 by 11 inch published page in Transactions of the ASABE and Applied Engineering in Agriculture, and $50 ($55 for non-members) per 6 by 9 inch published page in the Journal of Agricultural Safety and Health and Biological Engineering Transactions (formerly Biological Engineering). You will be advised of the total page charges when you receive the page proofs and billed when your article is published.
And now they expect me to pay for all this???
How much are libraries paying for subscriptions again? (The answer is here)
This journal is one particular example, but page charge fees levied on authors are not uncommon for scientific journals. Subscription costs to these journals for university libraries are astronomical.
I am not the first to have similar concerns.
Academic publishers have clearly become a scam industry. This is a market that is not only ripe for disruption, but by most reasonable standards should not exist (where do you think the money for subscription fees comes from? hint: probably your taxes).
What about the Public Library of Science? It's a good, non-profit scam. According to Wikipedia, "PLoS [author page charges] vary from $1,300 to $2,850." $2,000 to put your paper on the Internet. (note for those not aware: peer reviewers are volunteers who don't get paid, so there are no value-add costs that this $2,000 covers)
"But Vladimir," you say, "who is going to pay for publishing my
January 16, 2011
When making large deployments of commercial software, it is not uncommon for companies to force the suppliers to place the source code of that software into escrow. That might be an option if you're a large corporation, but for most users of SaaS/cloud computing services it is not, and it doesn't look like it will become one anytime soon.
There's nothing stopping your service provider from going bankrupt, being acquired, or deciding to shut down the service. This has already happened when SalesForce acquired SiteMasher (the latter was discontinued), and Twitter acquired DabbleDB - development and new signups ceased, but at least the current users have the comfort of knowing that "In the event we terminate the service, we will provide our customers with at least 60 days advance notice." And then what?
With shrink-wrapped software, you could continue running your existing version. Even if the discontinued software was tied to discontinued hardware, you could keep critical business functions running via judicious maintenance and spare parts suppliers (and later take advantage of emulation technology). This scenario is not uncommon, and of course entirely impossible for a cloud service.
This makes SaaS/cloud computing a big risk to base your business on. Richard Stallman has previously criticized the lock-in risks of cloud computing, and there are also security concerns.
I think there's an overlooked strategy for mitigating this risk based around the Affero GPL. Releasing your service software as AGPL would eliminate this risk for customers, but unlike the GPL or other licenses it would keep your product protected - all competitors using your code would have to release their changes to the public (and to you). There's also the possibility of dual-licensing your code - an "Enterprise" version with the possibility of escrow for large customers, and an AGPL version with less features for smaller customers.
Why not simply put an escrow clause into the contract with all customers? It won't help the smaller ones - they will certainly lack the knowledge and resources to go through your proprietary system and set it up on their intranet. This is unlikely to be the case for Free Software that has a lot of users.
As more web services are created and shut down and their customers burned, the trend is likely to swing away from cloud computing. I think a Free Software strategy based around the AGPL is a way to avoid a "cloud computing winter."
Zmacs, MCL's Fred, and the LispWorks editor all contain a very nice time-saving feature absent from Emacs called mouse-copy. Rainer Joswig wrote a good description of mouse-copy; in brief it can be summarized as "hold down Ctrl and click on an s-exp to copy it to the current point."
I first found out about mouse copy from working with Jedi, JazzScheme's IDE, and ever since I've wanted it for my Emacs setup.
Michael Weber's redshank extensions to Paredit/SLIME include mouse-copy, but it depends on buffers being in Paredit mode.
Fortunately there's a simple way to get generic mouse-copy in Emacs. Unfortunately it doesn't do the right thing when it comes to spaces. This is easy to fix by borrowing a couple of lines from redshank. This is the mouse-copy I use right now, and it seems to work pretty well.
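If you want to try something similar without pulling in redshank, here is a minimal sketch of the idea in Emacs Lisp (the function name and keybinding are my own; the whitespace fixup mimics what redshank does, so check its source for the real thing):

```lisp
;; A sketch of generic mouse-copy for Emacs: Ctrl-click an
;; s-expression anywhere to insert a copy of it at point.
(defun my-mouse-copy-sexp (event)
  "Copy the s-expression under the mouse click to point."
  (interactive "e")
  (let* ((posn (event-end event))
         ;; Read the sexp at the click position without moving point.
         (sexp (with-current-buffer (window-buffer (posn-window posn))
                 (save-excursion
                   (goto-char (posn-point posn))
                   (thing-at-point 'sexp)))))
    (when sexp
      ;; Insert a separating space unless we're at the start of a
      ;; line or right after whitespace or an open paren
      ;; (the redshank-style space fixup).
      (unless (or (bolp) (memq (char-before) '(?\s ?\( ?\n)))
        (insert " "))
      (insert sexp))))

;; C-down-mouse-1 is bound to mouse-buffer-menu by default,
;; so it has to be shadowed for the click binding to fire.
(global-set-key [C-down-mouse-1] 'ignore)
(global-set-key [C-mouse-1] 'my-mouse-copy-sexp)
```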
As you might suspect there's more time-saving things you can do with the mouse. For example, in redshank M-S-mouse-1 generates a skeleton for a make-instance when you click on a defclass definition. Looking at TI Explorer ZWEI sources, Lisp Machines had an entire modifier-key-and-mouse-button-chord convention that really made s-exp manipulation easy and direct. It would be nice to have something like that for Emacs.
If you have tips on better ways of using Emacs for editing Lisp, consider sharing them on CLiki's Emacs tips page.
January 10, 2011
The ALU wiki is running again. Despite requiring registration and putting a captcha on every edit, it seems to have a worse spam problem than CLiki. Both appear to be spammed by hand rather than by bots. Makes me wish the current SEO/content mill bubble would burst sooner.
One thing the ALU wiki does need is more up-to-date content. In particular, if you're a Lisp consultant or freelancer, please add yourself to the ALU wiki Lisp consultants directory.
LinkedIn gave me some ad credit to try out their ad platform, and I'm planning to run ads targeted at technologists and product managers to this landing page that strongly encourages them to try Common Lisp. Any ideas for how I can make it better?
One thing you might notice when perusing Ediware (what Luís Oliveira branded Edi Weitz's excellent Free Lisp Software) is the uniformly useful documentation right on the project webpage. What you may not realize is that Edi has written some software to help you write documentation like he does. DOCUMENTATION-TEMPLATE takes a package and generates HTML to describe the package's exported symbols (you are writing docstrings, right?).
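The whole workflow fits in a couple of forms. A sketch, assuming a hypothetical package name (check DOCUMENTATION-TEMPLATE's own page for the exact keyword arguments it accepts):

```lisp
;; Generate Ediware-style HTML documentation from the docstrings
;; of a package's exported symbols. MY-LIBRARY and the output
;; path are made-up examples.
(ql:quickload "documentation-template")

(documentation-template:create-template
 :my-library
 :target #p"/tmp/my-library-doc.html")
```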
Speaking of Ediware, one of the least appreciated of Edi's libraries is CL-INTERPOL. Besides regular expressions, it's also handy for things like HTML templating.
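A quick taste of what that looks like (the function and page content here are invented for illustration):

```lisp
;; CL-INTERPOL adds #?"..." strings with ${...} interpolation,
;; which makes quick-and-dirty HTML templating pleasant.
(ql:quickload "cl-interpol")
(interpol:enable-interpol-syntax)

(defun greeting-page (name)
  ;; NAME is interpolated directly into the string at the ${...} site.
  #?"<html><body><h1>Hello, ${name}!</h1></body></html>")
```

Note that the real win over FORMAT is for long templates with many interpolation points, where ~A directives quickly become unreadable.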
One of the highest-impact papers I've managed to overlook has come to my attention recently: Henry Baker's Metacircular Semantics for Common Lisp Special Forms. If you're working on Common Lisp compilers/translators/simulators it's a must-read (I wish I had noticed it sooner because a lot of the techniques are applicable to Parenscript). Techniques like that are also useful if you want to fake CL-style control flow mechanisms in other languages.
Perhaps the coolest such hack I've seen is Red Daly's implementation of the Common Lisp condition system for Parenscript. It's worth reading just for the fact that the implementation code explains how the condition system works better than the Hyperspec manages to.
Bonjure, the Montréal Clojure users' group, is having its next meeting on Friday, January 21 at 17:30 at CRIM.
FunctionalJobs.com recently launched. They seem to have bootstrapped their first listings from Lispjobs; hopefully they'll have more Lisp-related jobs in the future.