Mike Schaeffer's Blog

Articles with tag: tech
August 18, 2008

Harry McCracken just wrote a bit comparing the price of PCs to Macintoshes. Like most of these guys, he misses the point. Consider his methodology: "I chose a standard [Apple] MacBook configuration...Then I configured laptops as similarly as possible from the country's two largest PC manufacturers". The problem is that this methodology takes the set of Apple machines to be the set of valid configurations for comparison, excluding configurations that Apple does not offer. Just for the sake of a fuller comparison, what does a MacBook cost with these configurations?

  • A TrackPoint.
  • A numeric keypad.
  • Two internal batteries.
  • Two internal mouse buttons.
  • A swappable drive bay.
  • A docking station.
  • A calibrated, high-gamut display and a digitizer.
  • A display smaller than 13" or bigger than 17".
  • No keyboard.
  • A convertible tablet configuration.
  • Embedded on a PXI card.
  • The absolute highest performance.
  • The absolute minimum cost.

Of course, none of these configurations are available from Apple. If you need or want one of these options, you can't get it at any price from Apple. Similar comparisons can be made in the server and desktop PC spaces.

This is an unsurprising result. When you enlarge the playing field beyond Apple's relatively limited reach, it becomes even more apparent that these comparisons aren't 'Apple vs. PC'; what they really are is 'Apple vs. The Entire Computer Industry'. Apple doesn't have the capability, desire, or brand to fare well in such a comparison: there are just too many market segments they don't address. Addressing all of these segments would leave them with a confusing product line, a highly taxed engineering group, and a muddled brand image.

Part of the value of the PC platform is that it is not subject to the limitations of being confined to one highly image-sensitive company. Part of the value of the PC is that it allows other vendors to enlarge the platform into new segments. Missing out on this is one of the costs of picking an Apple, and it's a cost that goes unmentioned in most comparisons, including McCracken's.

June 12, 2008

I was born in 1975. In the 'computer world', this means I grew up at the tail end of the 8-bit era. By the time I was a teenager, the market was in the middle of deciding whether to go with PCs, the Apple Macintosh, or something else. Microsoft basically cinched that deal in 1990 with the release of Windows 3.0, the first relevant version. A PC running Windows 3.0 wasn't as nice as a Macintosh, but it didn't matter. If you already had a PC, you could buy Windows off the shelf for $89, retain all of your existing hardware and software, and then experiment with the GUI when you had the time. If typing win at a DOS prompt took you down the rabbit hole, clicking 'Exit Windows' took you right back to your comfort zone.

Windows 3.0 also had the benefit of a huge installed base of latent and mostly unused hardware. A typical business PC in 1990 might have been something like an 80286 with 2MB of RAM, a 40MB disk, and an EGA (640x350x4bpp) bitmapped display. It would then be running DOS software that basically couldn't address more than the first 640K of memory, and if you ever saw the bitmap display in use, it was probably for a static plot of a graph. Compared to a Macintosh from the same year, a PC looked like something from a different generation entirely. Windows 3.0 changed all this. It allowed you to switch your 80286 into 'Protected Mode' to get at that extra memory. It provided a graphics API (with drivers!) and forced programs to use the bitmapped display. It provided standard printer drivers that worked for all Windows programs. Basically, for $89 it took the hardware you already had and made it look almost like the Macintosh that would otherwise have cost you thousands of dollars. It was utterly transforming.

Almost 20 years later, the most interesting thing about this is the relative timing of the hardware and its software support. Most of the hardware in my 'typical 1990 PC' was introduced by IBM in its 1984 announcement of the IBM PC AT. The first attempt by IBM and Microsoft to support the 80286 natively came three years later, in 1987's release of OS/2. The first 286-native platform to reach mainstream acceptance came in 1990. Think about that: it took six years for the open PC market to develop software capable of fully utilizing the 80286. The 80386 fared even worse; the first 386 machine was released in 1986, and it didn't have a major mainstream OS until either 1993 or 1995 (depending on whether or not you count Windows NT 3.1 as 'mainstream'). Thus, there were scores of 286 and 386 boxes that did nothing more than execute 8086 code really, really fast (for the time :-)). In modern terms, this is analogous to a vendor introducing a hardware device today and then delaying software support until 2018.

This is emblematic of the hugely diminishing value of an open device platform in today's computer industry. In 1989, using a computer was largely an exercise in getting the damn thing to work. When those are the issues you're worried about as a PC user, an open platform is helpful because it enables a broader selection of vendors for parts and software. If you've run out of slots for both your video board and your bus mouse interface, you can always switch to an ATI video board with a built-in mouse port. If you need a memory manager that supports VCPI to enable your V86 multitasker, you can always switch to something like QEMM/386. If you need more memory to run your spreadsheet, you can go to AST Technologies and buy a LIM/EMS board. When you're worried about these kinds of issues, issues 'low in the stack', the flexibility of choice provided by openness is useful enough that you might be more willing to bear the costs of a market slow to adopt new technologies.

Of course, price is also a factor. In 1988, Byte magazine ran a review of Compaq's Deskpro 386s. This was their first 80386SX machine, a desktop computer designed to be a cheaper way to run 80386-specific software. The cost of the review machine was something like $15,000. In 2007 dollars, this would buy you a nice, reasonably late-model BMW 3-series. A year later, in 1989, my family bought a similar machine from ALR, which cost around $3,000. That isn't nearly as bad, but it's still around $5,200 in 2007 dollars, which basically means that a mid-range 1989 PC was priced at the very top end of the 2007 PC market. With monetary costs that high, the other benefit of openness, price competition, becomes a much bigger deal. Compaq ended up suffering badly as competition drove market prices down to where they are today.

In the intervening 20 years, both of these circumstances have changed dramatically. PCs, both Windows and Macintosh, are well enough integrated that nothing needs to be done to get them to run aside from unpacking the box. NeXTStep, which in 1994 required a fancy $5,000 PC bought from a custom vendor to run well, will shortly be able to run (with long-range, high-speed wireless!) on a $200 handheld bought at your local shopping mall. Our industry has moved up Maslow's hierarchy of needs, from expensive, unreliable hardware run by the dedicated few to cheap, reliable hardware run by the disinterested many. We can now concentrate on more interesting things than just getting the computer to work, and it is with this shift that some of the unique value of openness has been lost. Unfortunately, the costs have been retained; there is no countervailing force in the market forcing open platforms to move any faster.

Personally, I believe this bodes very well for Apple's latest attempt to own the smartphone space. There will only be one vendor and one price for the iPhone, but the platform will be able to move faster to adopt new technologies, and integrate them more tightly, because there's only one kind of hardware to run on. The fewer hardware configurations and stricter quality control guidelines will make it easier (and more mandatory) for developers to produce high-quality software. The fact that entry into the software market is controlled doesn't matter, because there are still more eligible developers than the platform actually needs. The net result of all this is that Apple, again, has a product that looks 'next generation', but the pricing and openness factors that cost them that advantage in the early 90's are no longer there. It's a good time to be involved in the iPhone, methinks.

April 10, 2008

A few months ago, I ran into a problem with a macro that seriously changed my opinions on how they should be used. It all comes down to the fact that macros are incorporated into compiler output. Two pieces of code that look nicely decoupled in the source text can end up tightly entwined with each other once they are compiled.

To illustrate, I'll use the macro in question, something I once used to bind a sort of simulated 'multiple return values' in a dialect of Scheme. This is a low-level example, something from my hobby work, but it can apply equally well to other uses of macros.

(defmacro (values-bind form vars . body)
  (with-gensyms (form-rv-sym)
    `(let ((,form-rv-sym ,form))
       (list-let ,vars (if (%values-tuple? ,form-rv-sym)
                           (slot-ref ,form-rv-sym 'v)
                           (list ,form-rv-sym))
         ,@body))))

This macro expands code like this:

(values-bind (returns-2-args 'foo) (arg-1 arg-2)
   (+ arg-1 arg-2))

Into code that looks like this:

(let ((#:form-rv-sym-69@00beeec4 (returns-2-args 'foo)))
   (list-let (arg-1 arg-2) (if (%values-tuple? #:form-rv-sym-69@00beeec4)
                              (slot-ref #:form-rv-sym-69@00beeec4 'v)
                              (list #:form-rv-sym-69@00beeec4))
     (+ arg-1 arg-2)))

And then, the compiler compiles that form and drops the result into the output file, which now contains several pretty deep assumptions about the simulated multiple value protocol it needs to honor:

  • Values are returned in a single value that satisfies %values-tuple?.
  • Values are extracted from a tuple with a call to slot-ref for slot v.
  • Values are stored within slot v as a list.

While the source text that uses values-bind doesn't need to know any of these details, the compiler output does. This results in compiler output that is very closely tied to the value protocol; compiler output that is likely to be incompatible with any changes to that protocol.

In many development scenarios, this doesn't matter. Within a single project, if compiled file A comes to depend on assumptions embedded in macros from file B, it's less of an issue: both files are usually compiled at the same time. If both files can't be simultaneously compiled, things start to go wrong. I ran into this issue myself when trying to change the multiple value protocol I was using in my compiler. My core library was built with the old protocol, my new library was to be built with the new protocol, and the two could not interoperate for the brief period of time necessary to produce a compiled version of the new library. There are several possible approaches to solving this, but the one I took was a two-step process: building a new 'old' library that could handle both protocols, using it to compile a version that works only with the new protocol, and then switching over completely. It was a mess, and a mess I created myself with a macro that expanded into something that assumed way too much. The better approach, the approach that I switched to, is this:

(define (call-with-values proc vals)
  (apply proc (%values->list vals)))

(defmacro (values-bind form vars . body)
  `(call-with-values (lambda ,vars ,@body) ,form))

This expands the above code to something more palatable:

(call-with-values (lambda (arg-1 arg-2)
                     (+ arg-1 arg-2))
                  (returns-2-args 'foo))

The only assumption this makes in the compiled output is that there's a function call-with-values that calls its first argument with values passed in as its second argument. All of the gory details, which could easily be the same three from my list, are hidden behind function calls and dynamic linkage. This is actually the representation that made the two-step cutover approach plausible. Switching to this version of the values-bind macro removed assumptions about the value protocol from every call site, and made it easy to switch.
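
For what it's worth, the gory details can themselves sit behind an ordinary function boundary. Here is a minimal sketch of what %values->list might look like, assuming the same three-part protocol listed above; the definition itself is hypothetical:

; Hypothetical sketch: the protocol knowledge lives in one function.
; A tuple satisfying %values-tuple? keeps its values in slot v as a
; list; anything else is treated as a single return value.
(define (%values->list vals)
  (if (%values-tuple? vals)
      (slot-ref vals 'v)
      (list vals)))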

The upshot of this is something that's, I'm sure, pretty common knowledge in Lisp/Scheme circles: macros are best when limited to syntax, with the underlying functionality implemented in a more functional interface. The functional interface keeps things more decoupled, even when compiled, and leaves your software more manageable. It also provides a second way to 'get at' the functionality provided by the underlying code. With the function/macro split, the macro expansion can be avoided entirely in cases where you already have a closure that contains the code you need to run, as in the sketch below.
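
To make that concrete, here's a hypothetical example (sum-two is a made-up procedure; returns-2-args is the one from the earlier example). The macro and the function both reach the same underlying code:

; Through the macro, which expands into a call-with-values form:
(values-bind (returns-2-args 'foo) (arg-1 arg-2)
  (+ arg-1 arg-2))

; Through the function, reusing a closure that already exists and
; skipping macro expansion entirely:
(define (sum-two arg-1 arg-2)
  (+ arg-1 arg-2))

(call-with-values sum-two (returns-2-args 'foo))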

One more brief example, a bit higher up the 'stack' in the language environment, is the transformation of this macro:

(defmacro (with-output-to-string . code)
  (with-gensyms (saved-output-port-sym output-string-sym)
    `(let ((,saved-output-port-sym (current-output-port))
           (,output-string-sym (open-output-string)))
       (unwind-protect (lambda ()
                         (set-current-output-port ,output-string-sym)
                         ,@code
                         (get-output-string ,output-string-sym))
                       (lambda ()
                         (set-current-output-port ,saved-output-port-sym))))))

Into this macro/function pair:

(define (call-with-output-to-string fn)
  (let ((saved-output-port (current-output-port))
        (output-string (open-output-string)))
    (unwind-protect (lambda ()
                      (set-current-output-port output-string)
                      (fn)
                      (get-output-string output-string))
                    (lambda ()
                      (set-current-output-port saved-output-port)))))

(defmacro (with-output-to-string . code)
  `(call-with-output-to-string (lambda () ,@code)))
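
As a rough usage sketch (the "hello, world" strings are just placeholders), both entry points produce the same result: the macro wraps its body in a thunk, while code that already has a thunk in hand can call the function directly:

; Through the macro:
(with-output-to-string
  (display "hello, ")
  (display "world"))                      ; => "hello, world"

; Through the function, with an existing thunk:
(call-with-output-to-string
  (lambda () (display "hello, world")))   ; => "hello, world"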

February 22, 2008

The instructions I gave earlier on Renaming SVN Users work only when the SVN repository is hosted on a machine that can run SVN hooks written as Unix-style shell scripts. On a conventional Windows machine, one without Cygwin, MSYS, or similar, you have to switch to writing hooks in something like the Windows batch language.

If all you want to do is temporarily rename users, then you can just create an empty file named pre-revprop-change.cmd in your repository under hooks\. The default return code from a batch file is success, which SVN interprets as allowing all revision property changes, all the time, by anybody. If you want to implement an actual policy, Philibert Perusse has posted a template script online.
