Mike Schaeffer's Blog

Articles with tag: tech
September 12, 2009

It took long enough, but finally, I've taken the time to set up a better workflow for this blog:

  • The master copy of the blog contents is no longer on the server. It's now on one of my personal machines.
  • I'm managing site history using git. This was a nice idea, but git and blosxom have a fundamental difference of opinion on the importance of file datestamps: blosxom relies on datestamps to assign dates to posts, while git deliberately updates datestamps on checkout so that build systems work correctly. There are ways to reconcile the two, but it's not worth the time right now.
  • Uploads to the server are done with rsync invoked through a makefile. (ssh's public key authentication makes this blazingly fast and easy.)

Maybe now, I'll finally get around to writing a little more. (Or, I could investigate incorporating Markdown, or the Baseline CSS Framework, or....)

August 5, 2009

I've been meaning to write this for months... after switching to an iPhone last October, I have some thoughts on the transition away from Windows Mobile. Most of my detailed comments are complaints, so before I continue, it's worth saying that I do think the iPhone is the best smart phone you can buy. It is, by far, the best answer the industry has come up with for this class of device. That said, it's more fun (and potentially useful) to complain:

  • Touch Screen - I remember shopping with my parents for a car in the late 80's. One of the cars we looked at was a Buick Riviera with a touch screen in the center console. It was cool, but since it lacked tactile feedback, you had to be looking at it to use it. Flash forward 23 years, and you can replicate this experience in the palm of your hand, for better or for worse.

  • 'Ambient Information' - The phone does a poor job of making information ambiently available. To see your next appointment, you need to open the Calendar unless the reminder has already displayed. (This could go on the home page.) To be notified of a new e-mail, you need to unlock the phone and look at the home page. (This could be an LED on the case.) As notifications build up, they wind up truncated and incomplete, presumably so they can fit in an artificially small box on the screen.

  • Portrait/Landscape - I wake up in the morning and want to check e-mail before I get up. I grab the phone off the nightstand, look at the display, and it... switches to landscape mode. I'm driving down the road and want to skip a track, so I grab the phone (eyes on the road), put my finger in the general area where the 'forward' button is, and it... switches to landscape mode. Landscape is useful when you need it, and a usability menace when you don't. There needs to be better control over when it engages and when it doesn't. (In this case, physical buttons for skipping forward and backward among tracks might be nice too... Buick ultimately dropped the touch screen entirely, and modern cars with navigation tend to also offer physical controls for key functions.)

  • e-Mail - I have two e-mail accounts set up on my phone: personal and business. It takes five taps to switch between them. A unified view would be nice (a list of the union of all inboxes, color-coded by inbox). An easier way to pick an inbox would be almost as nice.

  • Large e-mails - By default, large e-mails are only partially downloaded to the phone, and there's a button at the bottom of one of these mails that lets you download the rest. Of course, once it does, it then zips you back to the top of the mail, so you have to manually scroll through the (remember, it's large) e-mail to get to where you were reading. Argh.

  • Latency - Maybe a 3GS would fix this, but the phone seems very slow to change modes and update the display. I find myself continually waiting split-seconds for the thing to animate the transition from one display to the next. I'm asking a lot here, but I don't care... I use the thing most waking hours of most days.

  • App Store Rejections - This is a problem, it sucks for app developers, and it won't matter to the success of the platform. The vast majority of customers will never hear that Apple censored a dictionary (!), and even if they did, it won't stop them from buying. In the short term, my guess is that Apple will make whatever minimal changes it needs to make to keep developers quiet enough, and the iPhone will continue to do very well. Phone buyers don't care enough about choice, and app developers will always tend to want to code for the platform where they have the best shot at making money, which is currently the iPhone. In the long term, my guess is that a lot more of this content will wind up on mobile web sites than in the store. After all, a website can be an icon on the home page, avoid the risk of Apple's rejection, and also get to run on Android, Pre, and Windows Mobile.

  • App Store - 25,000 applications on the site, and I might look at 10 or 20 before deciding to make a purchase. The way the store presents applications (controlled by Apple) has a huge impact on which apps succeed and which apps fail. Even if the rejection problem magically goes away, Apple still controls the horizontal and the vertical. (A lot like Google's control over the fate of websites...)

  • Keyboard - After ten months, it's still tedious and error-prone for me. It works, but only just. Apple should provide a keyboard layout that works like a BlackBerry (or even T9) and trades off multiple letters per key in exchange for larger keys.

  • Industrial Design - I love the way the phone looks and feels, so I wrap it in a tacky add-on case to 'protect it'. So does most everybody else. Last I heard, good design was about making a product that looks good and works well. The rampant sales of cases imply to me that something is missing with the 'works well' part of that equation.

I do like the thing, and I wouldn't switch away, but it's far from perfect. Let's hope it gets better.

April 16, 2009

I recently was asked why I like Lisp. For me, it boils down to the fact that Lisp makes it easy to control when your code is evaluated. Most languages only let you evaluate code at runtime. There are ways around this (C++ templates, cpp, code generation, etc.), but they all have severe limitations. In contrast, Lisp makes it easy to run actual Lisp code at compile time (macros) or even read time (reader macros). Combine that with the fact that Lisp code is pretty easy to manipulate with Lisp itself, and it becomes much easier to do things that are usually restricted to language developers, which can be quite a force multiplier. Just to illustrate, most of the language features that C# has over Java (LINQ, properties, closures, lambda syntax, yield return etc.) could be added to Java by 'ordinary developers' if Java somehow had these things I like so much about Lisp.

But it doesn't...
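
To make the compile-time point concrete, here's a minimal sketch in the same defmacro-style dialect used in the later macro posts here; static-sum is a made-up name, not part of any real library. The macro body is ordinary Lisp code, but it runs when the form is compiled rather than when the program runs:

;; The body of this macro executes at macro-expansion (compile) time.
;; It folds a list of literal numbers into a single constant, so the
;; compiled output contains only that constant.
(defmacro (static-sum . numbers)
  (apply + numbers))

;; (static-sum 1 2 3 4 5) compiles exactly as if the source had said 15.
(static-sum 1 2 3 4 5) ; => 15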

April 16, 2009

...the Linux Hater is back.... and OpenMoko is not.

I can't say that either of these things surprises me.

February 12, 2009

In the recent debate surrounding the SOLID Principles of Object Oriented Design, the following two quotes stood out.

Last week I was listening to a podcast on Hanselminutes, with Robert Martin talking about the SOLID principles. ... And, when I was listening to them, they all sounded to me like extremely bureaucratic programming that came from the mind of somebody that has not written a lot of code, frankly.

Joel Spolsky

"Reading The Ferengi Programmer by Jeff Atwood really made me quite concerned. Here.s clearly an opinion which to me seems not grounded in sustained experience..."

Dhananjay Nene

Both of these are speculative slights on someone else's experience level, either generally or with a particular bit of technology. Bad rhetorical technique aside, my guess is that these are rooted in a fundamental lack of trust that the other side might actually have a well thought out reason for their point of view. This is an easy trap to fall into, particularly in a field as subjective as software design. Take the 'editor wars' as an example: which is better, Emacs, vi, or a full-featured IDE? I don't know the answer to this question, but I do know that I can find people that will tell me I'm wrong for preferring Emacs. Change the debate to something a bit more relevant, something like the design of a large piece of software, and people get even more vitriolic.

At least part of the solution to this problem is plain, old trust. Think about a good developer that's moved into a lead role: it's easy to see how they might care enough about a particular design point to impose it on their team, either by implementing it themselves or by dictate. Where the trust comes in is in avoiding that trap. If I impose a choice on my team, I limit their ability to explore the design space themselves, take their own risks, and then potentially fail. I also limit their ability to correct my own misconceptions... if I think I'm right enough to mandate a design, I also probably think I'm right enough to ignore you and do my own thing anyway. Ironically enough, this makes the combination of conviction and risk aversion its own risk, and potentially a big one without a counterbalance. (On a personal level, if you go around imposing your will and/or ignoring points of view, you also lose the opportunity to learn from those around you.)

And this is where the bit about rhetorical technique comes into play. As satisfying as it can be to say that somebody you disagree with "...has not written a lot of code, frankly.", it's really beside the point. It doesn't matter, even were it true. What matters more to reasonable discussions about engineering technique are specific and testable statements: something like "Interface Segregation will help keep defect rates down by promoting better unit tests." You may or may not agree with this statement, but it's more likely to lead to a relevant conversation than slights on experience or dogmatic declarations of opinion as fact. Several years ago I was told in no uncertain terms that I had made a design choice that 'wasn't scalable'. I ran some tests and came back with some numbers that showed my choice satisfied our requirements. Who do you think won that debate, the buzzword or the numbers? Specifics and testability can count for a lot. Dogma, not so much.

To be fair, most of the two blog posts I mention above are focused on meatier material than my quotes imply. I particularly liked Atwood's conclusion that "Rules, guidelines, and principles are gems of distilled experience that should be studied and respected. But they're never a substute [sic] for thinking critically about your work." For experienced developers, I expect that Nene would also agree. After all, he writes that "you have the experience on your side to generally make the right judgement calls and you are likely to anyway apply them under most of the cases." In a sense, both are arguing the same thing, namely that judgement ultimately drives the design process over strict adherence to a set of rules. The difference is that Nene takes it a few steps further and draws the conclusion that good developers produce good code, good code is SOLID, and Atwood's blog post is either useless or harmful. Maybe there are some valid points here, but they're obscured by a dogmatism that is more of a distraction than a productive way to think about software design.

November 6, 2008

In an era in which customers are almost begging Microsoft not to discontinue Windows XP, I was surprised to see a recent news story on the end of life of Windows for Workgroups 3.11 (WfWG). If you're not completely up on the early history of Windows, WfWG 3.11 was released in August of 1993, and was the last of the major US-market versions of Windows without native Win32 support out of the box. It was also one of a series of Windows releases in the early 90's that turned Windows from 'the library you need to run Excel' into a legitimate platform for general purpose computing.

From its introduction in 1985 until the release of Windows 3.0 in 1990, Windows was almost entirely composed of the same basic core: DOS for file access and system startup, and a collection of three DLLs (KERNEL, GDI, and USER) for memory management, device-independent graphics, and the GUI widget library and window manager. Atop the core sat programs written to the Windows API. All of this ran sharing the one 20-bit segmented address space provided by x86 real mode, with 640K of usable memory. If you were lucky, you might have had a LIM/EMS board that allowed a few MB of extra memory to be addressed through a 64KB window at the top of the address space. If you were really lucky, you might have had an 80386 computer with a special program that let it pretend its extra memory worked like a LIM/EMS board. Needless to say, memory was tight, difficult to use, and dangerous to share between multiple programs.

The solution to this memory problem was initially to be OS/2. OS/2 was the operating system part of IBM's vast (and doomed) PS/2 program to recapture the PC space from clone vendors. Like DOS, it was done in partnership with Microsoft, but IBM took a much more active role in the design and development of OS/2 than they did with DOS. OS/2's most noteworthy feature was the fact that it was designed to run in 80286 'protected mode' rather than the 'real mode' of DOS and Windows. Protected mode, as its name implies, added memory protection between processes that made multi-tasking more reliable. Protected mode also widened the physical address space of the CPU from 20 bits to 24 bits, making it possible to directly address 16MB of memory without resorting to tricks like LIM/EMS paging. This was all good, but it was tempered by the fact that OS/2 was expensive to run and didn't run DOS programs very well, thanks to its choice of 80286 protected mode over the 80386. The only programs that could actually use the benefits of protected mode under OS/2 were the OS/2-specific applications that nobody had.

By the time 1988 rolled around, PC's with the capability of addressing more than 1MB of memory had been around since 1984, and there still wasn't a viable mainstream operating system that took advantage of this capability. This is when Windows got its big break: David Weise at Microsoft figured out how to run Windows itself in Protected Mode, along with unmodified Windows programs. Running existing software in protected mode was something of a holy grail, and Dr. Weise's idea ultimately resulted in Windows 3.0, released in 1990 to heady acclaim. Windows 3.0 also included the V86 multitasker from the older Windows/386 product. This meant Windows 3.0 could do things OS/2 could not do, like run multiple DOS programs at the same time and run them in graphical windows on the desktop.

Windows 3.0 ended up being a runaway sales success, and after its release, the rest of the dominoes fell fairly quickly. Microsoft's partnership with IBM effectively ended, with IBM getting a source license to Microsoft products through the early 1990's. IBM ultimately used this license to develop a special version of Windows they bundled with OS/2 2.0 to let Windows programs run under OS/2 ("a better Windows than Windows" went the ad). Microsoft's own 32-bit OS/2 2.0 got dropped, and the work done on OS/2 NT (3.0) ultimately formed the basis for 1993's Windows NT and the Win32 API. The next version of 16-bit Windows, Windows 3.1, dropped support for real mode entirely, and as it evolved into Windows 95, more and more system services were moved into 32-bit code. This 16/32-bit hybrid version of Windows lasted until Windows Me. It was definitely baroque, and ended up notoriously unreliable, but its evolution from 256K 8088's to 128MB Pentiums is to my eye one of the more impressive examples of evolutionary software engineering. I don't miss using these versions of Windows, but it's easy to miss the 'brave new world' spirit they embodied.

August 18, 2008

Harry McCracken just wrote a bit comparing the price of PC's to Macintoshes. Like most of these guys, he misses the point. Consider his methodology: "I chose a standard [Apple] MacBook configuration...Then I configured laptops as similarly as possible from the country's two largest PC manufacturers". The problem is that this methodology takes the set of Apple machines to be the set of valid configurations for comparison, excluding configurations that Apple does not offer. Just for the sake of a fuller comparison, what does a MacBook cost with these configurations?

  • A TrackPoint.
  • A numeric keypad.
  • Two internal batteries.
  • Two internal mouse buttons.
  • A swappable drive bay.
  • A docking station.
  • A calibrated, high-gamut display and a digitizer.
  • A display smaller than 13" or bigger than 17".
  • No keyboard.
  • A convertible tablet configuration.
  • Embedded on a PXI card.
  • The absolute highest performance.
  • The absolute minimum cost.

Of course, none of these configurations are available from Apple. If you need or want one of these options, you can't get it at any price from Apple. Similar comparisons can be made in the server and desktop PC spaces.

This is an unsurprising result. When you enlarge the playing field beyond Apple's relatively limited reach, it becomes even more apparent that these comparisons aren't 'Apple vs. PC'; what they really are is 'Apple vs. The Entire Computer Industry'. Apple doesn't have the capability, desire, or brand to fare well in such a comparison: there are just too many market segments they don't address. Addressing all of these segments would leave them with a confusing product line, a highly taxed engineering group, and a muddled brand image.

Part of the value of the PC platform is that it is not subject to the limitations of being confined to one highly image-sensitive company. Part of the value of the PC is that it allows other vendors to enlarge the platform into new segments. Missing out on this is one of the costs of picking an Apple, and it's a cost that is missing from most comparisons, including McCracken's.

August 12, 2008

I have a new favorite blog, the Linux Hater's Blog. Some anonymous Linux user has taken it upon himself to open a blog dedicated to all of the many reasons why desktop Linux sucks (which it does). While it's more than a little mean-spirited, this blog is the dissenting voice of Linux. It is the conscience that, if heeded, will make the Linux desktop a better place to work.

For all of the problems with Linux, it is also the one major platform that allows the motivated individual or company to actually address those problems. The single biggest difference between the Linux Hater's Blog and the (would-be) Windows and MacOS X Hater's Blogs is that on the Linux blog, it's actually possible to do something about the problems. Consider this: 18 years ago, there was no Linux; 12 years ago, there was no Gnome. In 1990, the Linux Hater's blog would have had one post: "It doesn't exist, go buy Windows." The reason I mention this is that while it's easy to dismiss the benefits of open source as purely theoretical (i.e.: "Have you ever needed to recompile your kernel?"), the benefits of open source are the entire reason it exists at all.

To look at this in a bit more depth, consider the gnome-panel as an example. Based on the copyright claims in the source code, gnome-panel is itself a collaboration of Eazel, Helix Code/Ximian/Novell, Sun Microsystems, Red Hat, The Free Software Foundation, Ian McKellar, James Wilcox, Rob Adams, Vincent Untz, and Carlos Garcia Campos. All of these contributors found things to change or fix, 'itches to scratch', and all of them changed or fixed the gnome-panel. This is something that basically cannot happen in the model of closed source software. If you want to change something in MacOS X, you basically have three options: try to convince Apple it is a worthwhile change by presenting (and giving up the rights to) a business case justifying the feature, try to go to work for Apple in the right group and convince them to let you implement your feature, or reimplement the entire thing yourself.

As a result of these kinds of trade-offs, cross-organization collaboration in closed source is a lot harder to come by than in open source. Closed source essentially divides the stakeholders in a piece of software into two groups: those that can take responsibility for the software by making changes, and those that cannot and must either accept the changes as provided or work around them. In that sense, Free Software is the licensing model that brings to software the democratic ideals of personal responsibility and the sovereignty of the people. Like any democracy, in the short term it will have issues compared to more centralized forms of planning, but in the long term it will be a much more vibrant and productive place to be. This is also why the Linux Hater's Blog is so very important. To see why, continue the analogy with democracy a bit, and consider the process by which the United States adopted its constitution.

After the U.S. Constitutional Convention in Philadelphia, there came the long and highly political process of the states ratifying the new form of government. During this two-year-long debate, there was a series of papers, the Federalist Papers, written in support of the proposed Constitution. Less well known are the Anti-Federalist Papers, a series of dissenting arguments against ratification. This dissent primarily centered around the lack of a Bill of Rights, and ultimately led to the incorporation of a Bill of Rights as the first ten amendments to the Constitution. The dissent was not just criticism: open process and free debate allowed it to be a key part of the construction of the Constitution.

This is a much grander version of what the Linux Hater's blog can do for Linux. By dissenting against the idea that Linux is already ready for the desktop (or the server), it also provides a list of weaknesses to fix. Unlike a Windows Hater's Blog, the freedoms of Linux allow this list of weaknesses to effectively become a to-do list for anyone or any company with the motivation and time to do the work. It is therefore not a liability to Linux, but an asset that derives its value from the freedom at the core of Free Software. Ironically enough, because of this, the 'Linux Hater' could easily turn out to be one of Linux's best friends.

June 12, 2008

I was born in 1975. In the 'computer world', this means I grew up at the tail end of the 8-bit era. By the time I was a teenager the market was in the middle of deciding whether to go with PC's, the Apple Macintosh, or something else. Microsoft basically cinched that deal in 1990 with the release of Windows 3.0, the first relevant version. A PC running Windows 3.0 wasn't as nice as a Macintosh, but it didn't matter. If you already had a PC, you could buy Windows off the shelf for $89, retain all of your existing hardware and software, and then experiment with the GUI when you had the time. If typing win at a DOS prompt took you down the rabbit hole, clicking 'Exit Windows' took you right back to your comfort zone.

Windows 3.0 also had the benefit of a huge installed base of latent and mostly unused hardware. A typical business PC in 1990 might have been something like an 80286 with 2MB of RAM, a 40MB disk, and an EGA (640x350x4bpp) bitmapped display. It would then be running DOS software that basically couldn't address more than the first 640K of memory, and if you ever saw the bitmap display in use, it was probably for a static plot of a graph. Compared to a Macintosh from the same year, a PC looked positively like something from a totally different generation. Windows 3.0 changed all this. It allowed you to switch your 80286 into 'Protected Mode' to get at that extra memory. It provided a graphics API (with drivers!) and forced programs to use the bitmapped display. It provided standard printer drivers that worked for all Windows programs. Basically, for $89 it took the hardware you already had and made it look almost like the Macintosh that would otherwise have cost you thousands of dollars. It was utterly transforming.

Almost 20 years later, the most interesting thing about this is the relative timing of the hardware and its software support. Most of the hardware in my 'typical 1990 PC' was introduced by IBM in its 1984 announcement of the IBM PC AT. The first attempt by IBM and Microsoft to support the 80286 natively came three years later, in 1987's release of OS/2. The first 286-native platform to reach mainstream acceptance came in 1990. Think about that: it took 6 years for the open PC market to develop software capable of fully utilizing the 80286. The 80386 fared even worse; the first 386 machine was released in 1986, and it didn't have a major mainstream OS until either 1993 or 1995 (depending on whether or not you count Windows NT 3.1 as 'mainstream'). Thus, there were scores of 286 and 386 boxes that did nothing more than execute 8086 code really, really fast (for the time :-)). In modern terms, this is analogous to a vendor introducing a hardware device today and then delaying software support until 2018.

This is emblematic of the hugely diminishing value of an open device platform in today's computer industry. In 1989, using a computer was largely an exercise in getting the damn thing to work. When those are the issues you're worried about as a PC user, an open platform is helpful because it enables a broader selection of vendors for parts and software. If you've run out of slots for both your video board and your bus mouse interface, you can always switch to an ATI video board with a built-in mouse port. If you need a memory manager that supports VCPI to enable your V86 multitasker, you can always switch to something like QEMM/386. If you need more memory to run your spreadsheet, you can go to AST Technologies and buy a LIM/EMS board. When you're worried about these kinds of issues, issues 'low in the stack', the flexibility of choice provided by openness is useful enough that you might be more willing to bear the costs of a market slow to adopt new technologies.

Of course, price is also a factor. In 1988, Byte magazine ran a review of Compaq's Deskpro 386s. This was their first 80386SX machine, a desktop computer designed to be a cheaper way to run 80386-specific software. The cost of the review machine was something like $15,000. In 2007 dollars, this would buy you a nice, reasonably late-model BMW 3-series. A year later in 1989, my family bought a similar machine from ALR, which cost around $3,000. That isn't nearly as bad, but it's still around $5,200 in 2007 dollars, which basically means that a mid-range 1989 PC was priced at the very top end of the 2007 PC market. With monetary costs that high, that other benefit of openness, price competition, becomes a much bigger deal. Compaq ended up suffering badly as competition drove market prices down to where they are today.

In the intervening 20 years, both of these circumstances have changed dramatically. PC's, both Windows and Macintosh, are well enough integrated that nothing needs to be done to get them to run aside from unpacking the box. NeXTStep, which in 1994 required a fancy $5,000 PC bought from a custom vendor to run well, will shortly be able to run (with long-range, high-speed wireless!) on a $200 handheld bought at your local shopping mall. Our industry has moved up Maslow's hierarchy of needs from expensive, unreliable hardware run by the dedicated few to cheap, reliable hardware run by the disinterested many. We can now concentrate on more interesting things than just getting the computer to work, and it is with this shift that some of the unique value of openness has been lost. Unfortunately, the costs have been retained, and there is no countervailing force in the market that's forcing open platforms to move any faster.

Personally, I believe this bodes very well for Apple's latest attempt to own the smartphone space. There will only be one vendor and one price for the iPhone, but the platform will be able to move faster to adopt new technologies, and integrate them more tightly, because there's only one kind of hardware to run on. The fewer hardware configurations and stricter quality control guidelines will make it easier (and more mandatory) for developers to produce high quality software. The fact that entry into the software market is controlled doesn't matter, because there are still more eligible developers than the platform actually needs. The net result of all this is that Apple, again, has a product that looks 'next generation', but the pricing and openness factors that cost them that advantage in the early 90's are no longer there. It's a good time to be involved in the iPhone, methinks.

April 10, 2008

A few months ago, I ran into a problem with a macro that seriously changed my opinions on how they should be used. It all comes down to the fact that macro expansions are incorporated into compiler output. Two pieces of code that look nicely decoupled in the source text can end up very entwined with each other once they are compiled.

To illustrate, I'll use the macro in question, something I once used to accept a sort of simulated 'multiple return value' in a dialect of Scheme. This is a low level example, something from my hobby work, but it can apply equally well to other uses of macros.

(defmacro (values-bind form vars . body)
  (with-gensyms (form-rv-sym)
    `(let ((,form-rv-sym ,form))
       (list-let ,vars (if (%values-tuple? ,form-rv-sym)
                           (slot-ref ,form-rv-sym 'v)
                           (list ,form-rv-sym))
         ,@body))))
This macro expands code like this:

(values-bind (returns-2-args 'foo) (arg-1 arg-2)
   (+ arg-1 arg-2))

Into code that looks like this:

(let ((#:form-rv-sym-69@00beeec4 (returns-2-args 'foo)))
   (list-let (arg-1 arg-2) (if (%values-tuple? #:form-rv-sym-69@00beeec4)
                              (slot-ref #:form-rv-sym-69@00beeec4 'v)
                              (list #:form-rv-sym-69@00beeec4))
     (+ arg-1 arg-2)))

And then, the compiler compiles that form and drops the result into the output file, which now contains several pretty deep assumptions about the simulated multiple value protocol it needs to honor:

  • Values are returned in a single value that satisfies %values-tuple?.
  • Values are extracted from a tuple with a call to slot-ref for slot v.
  • Values are stored within the slot as a list.

While the source text that uses values-bind doesn't need to know any of these details, the compiler output does. This results in compiler output that is very closely tied to the value protocol; compiler output that is likely to be incompatible with any changes to that protocol.

In many development scenarios, this doesn't matter. Within a single project, if compiled file A comes to depend on assumptions embedded in macros from file B, it's less of an issue: both files are usually compiled at the same time. If both files can't be simultaneously compiled, things start to go wrong. I ran into this issue myself when trying to change the multiple value protocol I was using in my compiler. My core library was built with the old protocol, my new library was to be built with the new protocol, and the two could not interoperate for the brief period of time necessary to produce a compiled version of the new library. There are several possible approaches to solving this, but the one I took was the two-step process of building a new 'old' library that could handle both protocols, using it to compile a version that works only with the new protocol, and then switching over completely. It was a mess, and a mess I created myself with a macro that expanded into something that assumed way too much. The better approach, the approach that I switched to, is this:

(define (call-with-values proc vals)
  (apply proc (%values->list vals)))

(defmacro (values-bind form vars . body)
  `(call-with-values (lambda ,vars ,@body) ,form))

This expands the above code to something more palatable:

(call-with-values (lambda (arg-1 arg-2)
                     (+ arg-1 arg-2))
                  (returns-2-args 'foo))

The only assumption this makes in the compiled output is that there's a function call-with-values that calls its first argument with values passed in as its second argument. All of the gory details, which could easily be the same three from my list, are hidden behind function calls and dynamic linkage. This is actually the representation that made the two-step cutover approach plausible. Switching to this version of the values-bind macro removed assumptions about the value protocol from every call site, and made it easy to switch.

The upshot of this is something that's, I'm sure, pretty common knowledge in Lisp/Scheme circles: macros are best when limited to syntax, with the underlying functionality implemented in a more functional interface. The functional interface keeps things more decoupled, even when compiled, and leaves your software more manageable. It also provides a second way to 'get at' the functionality provided by the underlying code. With the function/macro split, the macro expansion can be avoided entirely when you already have a closure that contains the code you need to run.
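
For instance, here's a minimal sketch reusing the call-with-values function and the returns-2-args placeholder from above; sum-two is a hypothetical procedure, not from any real library. If the code to run already exists as a closure, there's no need to go through the macro at all:

;; 'sum-two' already contains the code we want to run against the values...
(define (sum-two arg-1 arg-2)
  (+ arg-1 arg-2))

;; ...so it can be handed straight to the functional interface,
;; bypassing the values-bind macro entirely.
(call-with-values sum-two (returns-2-args 'foo))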

One more brief example, a bit higher up the 'stack' in the language environment, is the transformation of this macro:

(defmacro (with-output-to-string . code)
  (with-gensyms (saved-output-port-sym output-string-sym)
    `(let ((,saved-output-port-sym (current-output-port))
           (,output-string-sym (open-output-string)))
       (unwind-protect (lambda ()
                         (set-current-output-port ,output-string-sym)
                         ,@code
                         (get-output-string ,output-string-sym))
                       (lambda ()
                         (set-current-output-port ,saved-output-port-sym))))))

Into this macro/function pair:

(define (call-with-output-to-string fn)
  (let ((saved-output-port (current-output-port))
        (output-string (open-output-string)))
    (unwind-protect (lambda ()
                      (set-current-output-port output-string)
                      (fn)
                      (get-output-string output-string))
                    (lambda ()
                      (set-current-output-port saved-output-port)))))

(defmacro (with-output-to-string . code)
  `(call-with-output-to-string (lambda () ,@code)))
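
As a quick usage sketch (the banner text and print-banner are just illustrations, not from any real code), the macro stays the convenient spelling, while the function underneath remains available whenever the code to run is already a closure:

;; Macro form: the body is quietly wrapped in a thunk.
(with-output-to-string
  (display "Hello, ")
  (display "world."))                       ; => "Hello, world."

;; Function form: hand an existing thunk straight to the function,
;; with no macro expansion at the call site.
(define (print-banner)
  (display "=== report ==="))

(call-with-output-to-string print-banner)   ; => "=== report ==="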
Older Articles...