Pierre R. Mai's Miscellany

The personal pages of Pierre R. Mai

Practical Lisp 2008

Regarding Zach Beane’s query about what people are working on in Common Lisp in 2008, here is what I’m currently working on in CL, all of which is in use by industrial customers:

  • Bit-accurate Reference Interpreter for Simulink/TargetLink discrete-time models, used to validate full code-generation tool-chains comprising code generator, compiler, linker and target hardware.

    Implementations used are both SBCL and LispWorks on Windows and Mac OS X, for Intel and PowerPC, to ensure diversity of implementation and prevent common-cause errors.

    Most of the code is self-written, including some internal support libraries, but the code base uses ASDF, CL-PPCRE, CL-YACC, CXML, Puri, SLIME, and trivial-gray-streams. Thanks to everyone who put so much work into those.

  • Framework for target-based testing of generated code, including test-harness generation, stimulus/result management and differencing. This is also used in the validation of code-generators, same as the Reference Interpreter above.

    Also uses both SBCL and LispWorks as implementations, and makes use of many of the same libraries.

  • Model-differencing and translation toolbox, with support for FIBEX, ASCET and UML-RT. Used in the maintenance and integration of inter-ECU models in automotive development.

    This tool is implemented using LispWorks on Windows, and employs CL-PPCRE and a modified early version of xmls. Note that the tool has been in production use for a couple of years, and is mostly in maintenance mode.

Well, there you have it; I’m looking forward to hearing about other users and uses!

The MacBook Air and Me

Ok, so I succumbed to the temptation of getting a MacBook Air, seeing as MacBook Pro updates are likely some way off and/or will be marginal improvements at best (especially on the 4GB front), and with a developer hardware discount about to expire.

Having ranted about even MacBook Pros being RAM-starved for my uses, and about barely living within the constraints of my self-upgraded 160GB HD, why oh why would I want a MacBook Air? Well, for one, because it’s sexy, and what self-respecting tech junkie wouldn’t want one?

The other main reason is that my work pattern has changed for the better in recent years, with less time spent at customer premises and more time spent in-office, with a certain amount of traveling all over the country/continent/world for shortish durations thrown in. So the idea of having a heavy-duty laptop as my main machine, for ease of permanent relocation, is becoming less important to me, and going back to a non-portable main machine plus a travel-friendly laptop might actually become palatable.

Of course, there are still a number of open questions, especially when it comes to keeping things in sync, to the point of being able to leave at fairly short notice without missing anything on the road. A lot of data, like e-mail, address books, baseline source code, etc., is already stored and synced through centralized servers, but there are still lots of things that are not, so I will have to see about that.

For the next couple of weeks I’m going to try to live within the confines of the MacBook Air as a sort-of main machine, with only non-essential stuff living somewhere else.

My first impressions of the MBA are quite favorable: it is indeed very thin and stylish, very well made, and reasonably light, while remaining stiff and somewhat robust.

The new touch-pad gestures are very, very likable, making me miss a mouse much less than usual. The optical-drive borrowing works reliably (though a directly connected drive is really preferable for largish installations, especially over 802.11g and slower networks).

Performance is nothing to be ashamed of, though of course once you enter swapping territory, having a 1.8” HD does not really help (D’oh!).

All in all, living with the MBA over the last couple of days has proved surprisingly unrestrictive, and I can really see this being the one and only computer for quite a lot of folks. The only thing I’m really missing is a FireWire 400/800 connector, since I really dislike USB 2.0 for mass storage (especially on Apple platforms), and have standardized on FireWire for all external storage. That said, most people will likely be just as happy with the USB 2.0 port.

We will see how things progress from here, once the mundane cruft accrues on the HD and synchronization issues crop up…

More on Aperture 2.0

The Aperture Users Professional Network has a nice overview of the new features and changes in Aperture 2.0, which seems to bear out some of my own first impressions. Particularly interesting might be the plug-in API, especially if it attracts some of the PS plug-in writers, thus obviating the need for a round-trip to PS in many circumstances.

Fraser Speirs seems to agree on the quality improvements in the RAW conversions, and has nice sample images here.

FWIW, here are some examples showing high-ISO shadow performance on a picture taken with a D70s at ISO 1600, converted with standard settings through Aperture 2.0, Lightroom 1.3 and Capture NX 1.3, in this order:

[Image: Demo6306 Aperture 2.0 Conversion Crop]
[Image: Demo6306 Lightroom 1.3 Conversion Crop]
[Image: Demo6306 Capture NX 1.3 Conversion Crop]

The 100% crops show a nicely even and fine-grained noise background in the sky in the Aperture 2.0 conversion that is not matched by the LR or NX conversions (at least with standard settings). While not really visible in the crops above, low-contrast detail does not seem to suffer as a side-effect of whatever Aperture is doing here, so this definitely seems to be a win for high-ISO work, like this hand-held shot at nightfall.

The crops above were all taken from the drainage pipe in the upper right of this overall picture of the Ammersee (to the west of Munich) at nightfall:

[Image: Ammersee at Nightfall]

Taken with a Nikon D70s, Sigma 10-20mm f4-5.6 zoom at 10mm, f4 and 1/25s, ISO 1600, hand-held.

First Impressions: Aperture 2.0

So Apple has shipped the long-awaited version 2.0 of Aperture, its professional photo-everything app, coinciding with the Mac OS X 10.5.2 update that ushers in (again long-awaited) support for the Nikon D3, D300 and Canon EOS 1Ds Mark III.

Having settled on Adobe Lightroom for my photography needs, but cursing its lack of multi-monitor support, and always being interested in alternatives, I took a quick peek using the trial version to see what has changed since Aperture 1.5.

My first impression is that speed has definitely improved since 1.5, especially on GPU-starved computers like my old-but-trusty PowerBook G4 17” 1.5GHz with 2GB RAM. That said, on this computer Lightroom is still a bit snappier for most operations, especially those that trigger a full RAW reconversion, but Aperture is definitely usable, and I’d imagine things are much better on a higher-end GPU than the old Mobility Radeon 9700 in the PowerBook.

I also definitely like some of the UI improvements, since to me the old Aperture UI felt a bit over-tweakable and somehow cluttered, whereas out-of-the-box Aperture 2 feels fairly approachable.

The new RAW rendering engine seems faster, and I quite like the output I have seen so far (based on random D70s samples): especially high-ISO images seem to have less chroma noise and a more fine-grained appearance to their luminance noise, without loss of detail, compared to Lightroom and Capture NX, yielding a fairly pleasing high-ISO film look. Take this first impression with a ton of salt, though, since I have only looked at a couple of pictures in detail, all of them from my D70s.

There is of course lots more in the new version, compared to Aperture 1.5, and I’ll definitely take a closer look, especially since the price for Aperture 2.0 seems to have dropped again, now to only $199.

I’ll probably post more of my impressions as I get to play more with Aperture 2.0.

On the Mobile 4GB Memory Barrier

Why does Intel not come out with mobile chipsets that can break the 4GB memory barrier? I can see how this is not a commodity item, given that 64-bit Windows is still not for the faint of heart, but surely there are enough leading-edge types in the 64-bit Windows, Linux and Mac OS X communities to make support for more than 32 bits of physical address space on mobile platforms sensible?

With minimum RAM recommendations of 1-2GB now for nearly all applications, there is really little headroom left under the common 3-4GB limit of the current crop of mobile chipsets. And RAM, or rather the lack of it, is still the number one reason for sluggish performance for most people. With Mac OS X on an old PowerBook G4 maxed out with 2GB RAM (which it has had for three years now), I often seem to have 9GB or more of swap in use, with Mail, Safari, NNW, Pathfinder, Skype, a couple of utility apps, Word and Eclipse, OxygenXML, and maybe an Emacs and a Lisp thrown in for good measure.

So, pretty please with sugar on top: give me a mobile chipset with support for more than 4GB of physical memory, and a matching Apple MacBook Pro or whatever, this year?

Fun With Fish-Eyes

There is most certainly a lot of fun to be had with fish-eye lenses, especially when they are used to put the observer right into the middle of whatever is happening, something at which both ultra-wide-angle lenses and fish-eyes excel.

The thing I particularly like about fish-eyes is that they can give a nicely rounded-off feeling to the composition, drawing the corners of the image slightly in, as in the following example, which would have been less effective with a normal wide-angle lens.

[Image: Bells in Memmingen]

This image was captured on a Nikon D70s with the Nikon 10.5mm f2.8 DX fish-eye lens in the pedestrian zone of Memmingen, Germany.

Kent M. Pitman: The Revised Maclisp Manual (the Pitmanual)

Kent M. Pitman has restored and made available on the net The Revised Maclisp Manual (the Pitmanual), which gives great insight into MACLISP, a dialect that had a major influence, both directly and indirectly via Zetalisp (the Lisp Machine Lisp), on the evolution of Common Lisp.

For those interested in such things, the Lisp Machine Manual (Chinual) is also a rich source of information on the MIT Lisp Machine Lisp, before it diverged into different dialects in the Symbolics and LMI commercial Lisp machines.

(Via Geoff Wozniak.)

Uses for Advice: Instrumentation in Event-driven Simulations

Reading Gary King’s query for users of advice (see also What is advice?), I’d like to add that advice and friends have uses beyond the usual debugging and patching applications that normally come to mind:

The ability to dynamically add or remove instrumentation code at runtime can be quite useful in cases such as event-driven simulations. When you are modeling complex systems, where long-running simulations can see billions of events and sub-events (i.e. sequences of distinct steps taking place during one event), it is often vitally important to efficiently log and analyze only those events or sub-events that really matter to the purpose of a particular simulation run: the simple approach of logging everything under the sun and only later deciding what to analyze is often impractical due to speed and space constraints.

So the better approach is to log only what is necessary; the difficulty, however, lies in determining which events matter and which don’t, especially since the things that matter might be completely different for different simulation runs, depending on which analysis the user wants performed.

The approach we took in our logistics simulation framework was to use advice, or rather a home-built instrumentation framework built on frobbing CMU CL’s fdefn-function directly, similar to Gerd Möllmann’s fwrappers code now present in CMU CL. Using this facility, report developers could define which methods they wanted instrumented and, having full access to all the information those methods had, record exactly the information they needed for a particular report.
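
For illustration, here is a minimal, portable sketch of the basic mechanism, using plain fdefinition swapping instead of CMU CL’s internal fdefn objects. All names here are made up for the example, and unlike fwrappers, this naive version does not compose multiple wrappers or survive redefinition gracefully:

(defvar *original-functions* (make-hash-table :test #'eq)
  "Maps function names to their uninstrumented definitions.")

(defun instrument (name make-wrapper)
  "Replace NAME's global definition with a wrapper.
MAKE-WRAPPER receives the original function and returns the new one."
  (let ((original (fdefinition name)))
    (setf (gethash name *original-functions*) original
          (fdefinition name) (funcall make-wrapper original))))

(defun uninstrument (name)
  "Restore NAME's original, uninstrumented definition."
  (let ((original (gethash name *original-functions*)))
    (when original
      (setf (fdefinition name) original)
      (remhash name *original-functions*))))

;; Purely illustrative example: count storage events without touching
;; the source of MELDE-EINLAGERUNG.
(defvar *einlagerungen* 0)

(instrument 'melde-einlagerung
            (lambda (real-function)
              (lambda (element quelle gebinde)
                (incf *einlagerungen*)
                (funcall real-function element quelle gebinde))))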

The instrumentation code was packaged together with the code that analyzed the recorded information and produced the final output; the simulation user could then switch individual reports on or off at runtime for a simulation run, and only the overhead needed for those reports was inflicted on that particular run.

So a simple analysis which hooks two methods, melde-einlagerung and melde-auslagerung, might look like this (sorry for the German names):

(define-analysis lager-umsatz-analysis
    ;; Analysis state: the warehouse groups under observation, plus
    ;; hash-tables accumulating inbound (Zugang) and outbound (Abgang)
    ;; volumes per warehouse group.
    ((laeger)
     (zugang-table :initform (make-hash-table :test #'eq))
     (abgang-table :initform (make-hash-table :test #'eq)))
  (:setup
   ;; Collect all warehouse groups (Lagergruppen) present in the model.
   (setf laeger (find-subjekt-types 'Lagergruppe)))
  (:instrument
   ;; Hook the storage/retrieval reporting methods: record the volume
   ;; of each container (Gebinde), then invoke the original method.
   (melde-einlagerung (element quelle gebinde)
     (when (typep element 'Lagergruppe)
       (incf (gethash element zugang-table 0) (gebinde-volumen gebinde)))
     (call-real-function element quelle gebinde))
   (melde-auslagerung (element quelle gebinde)
     (when (typep element 'Lagergruppe)
       (incf (gethash element abgang-table 0) (gebinde-volumen gebinde)))
     (call-real-function element quelle gebinde)))
  (:update
   ;; At each reporting interval, emit one CSV-style line per warehouse
   ;; group, relating turnover to current contents (Bestand) and total
   ;; capacity (Kapazitaet), then reset the accumulators.
   (multiple-value-bind (sim-zeit-string real-zeit-string)
       (analysis-zeit-strings lager-umsatz-analysis
                              (- (aktive-zeit)
                                 (analysis-interval lager-umsatz-analysis)))
     (loop
        for lager in laeger
        for bestand = (lagerelement-belegt lager)
        for kapazitaet = (lagerelement-kapazitaet lager)
        for zugang = (gethash lager zugang-table 0)
        for abgang = (gethash lager abgang-table 0)
        do
          (format t "~A;~A;~A;~,2F;~,2F;~,2F;~,2F~%"
                  sim-zeit-string real-zeit-string (subjekt-name lager)
                  (safe-prozent zugang bestand)
                  (safe-prozent abgang bestand)
                  (safe-prozent zugang kapazitaet)
                  (safe-prozent abgang kapazitaet))
        finally
          (clrhash zugang-table)
          (clrhash abgang-table)))))
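
The user-facing activation step is not shown above, but hypothetically, enabling such a report for a run might look something like this (enable-analysis is an assumed, illustrative name, not the actual framework API):

;; Hypothetical activation call -- ENABLE-ANALYSIS is just an assumed
;; name for the report-toggling entry point.
(enable-analysis 'lager-umsatz-analysis :interval (* 60 60))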

Of course, with Lisp there are also other solutions to this problem, like runtime compilation of conditionally instrumented methods; however, using advice was a simple and effective solution to this particular problem, one that also works without access to the source of the methods to be hooked.
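
To sketch the runtime-compilation alternative just mentioned (again with made-up names): one can rebuild a function on the fly with COMPILE, splicing in calls to whatever hooks are currently active, at the cost of needing the original source form at hand:

;; Illustrative sketch only: rebuild a function at runtime with COMPILE,
;; splicing in calls to whatever hook functions are currently registered.
(defvar *active-hooks* (make-hash-table :test #'eq)
  "Maps function names to lists of hook function names.")

(defun recompile-instrumented (name lambda-list body)
  "Recompile NAME from its source BODY, with calls to the active hooks
spliced in up front.  Assumes a simple required-args LAMBDA-LIST."
  (compile name
           `(lambda ,lambda-list
              ,@(loop for hook in (gethash name *active-hooks*)
                      collect `(,hook ,@lambda-list))
              ,@body)))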