Mike Schaeffer's Blog

Articles with tag: tech
April 16, 2009

I was recently asked why I like Lisp. For me, it boils down to the fact that Lisp makes it easy to control when your code is evaluated. Most languages only let you evaluate code at runtime. There are ways around this (C++ templates, cpp, code generation, etc.), but they all have severe limitations. In contrast, Lisp makes it easy to run actual Lisp code at compile time (macros) or even read time (reader macros). Combine that with the fact that Lisp code is pretty easy to manipulate with Lisp itself, and it becomes much easier to do things that are usually restricted to language developers, which can be quite a force multiplier. Just to illustrate, most of the language features that C# has over Java (LINQ, properties, closures, lambda syntax, yield return, etc.) could be added to Java by 'ordinary developers' if Java somehow had these things I like so much about Lisp.

But it doesn't...
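
To make the compile-time point concrete, here's a minimal sketch in Common Lisp. The unroll macro below is my own illustration, not anything standard: the loop inside it runs while the macro is being expanded, so the repetition is stamped out before the program ever starts.

    ;; A hypothetical loop-unrolling macro. The LOOP form executes at
    ;; macroexpansion (compile) time; only the expanded code survives
    ;; to runtime.
    (defmacro unroll ((var count) &body body)
      `(progn
         ,@(loop for i below count
                 collect `(let ((,var ,i)) ,@body))))

    ;; (unroll (i 3) (print i)) expands, before runtime, into:
    ;; (progn (let ((i 0)) (print i))
    ;;        (let ((i 1)) (print i))
    ;;        (let ((i 2)) (print i)))

Note that count has to be a literal integer, precisely because it's consumed at expansion time rather than at runtime. And nothing here required touching the compiler; it's the kind of thing an 'ordinary developer' can write.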

February 12, 2009

In the recent debate surrounding the SOLID Principles of Object Oriented Design, the following two quotes stood out.

"Last week I was listening to a podcast on Hanselminutes, with Robert Martin talking about the SOLID principles. ... And, when I was listening to them, they all sounded to me like extremely bureaucratic programming that came from the mind of somebody that has not written a lot of code, frankly."

Joel Spolsky

"Reading The Ferengi Programmer by Jeff Atwood really made me quite concerned. Here.s clearly an opinion which to me seems not grounded in sustained experience..."

Dhananjay Nene

Both of these are speculative slights on someone else's experience level, either generally or with a particular bit of technology. Bad rhetorical technique aside, my guess is that these are rooted in a fundamental lack of trust that the other side might actually have a well thought out reason for their point of view. This is an easy trap to fall into, particularly in a field as subjective as software design. Take the 'editor wars' as an example: which is better, Emacs, vi, or a full-featured IDE? I don't know the answer to this question, but I do know that I can find people who will tell me I'm wrong for preferring Emacs. Change the debate to something a bit more relevant, like the design of a large piece of software, and people get even more vitriolic.

At least part of the solution to this problem is plain, old trust. Think about a good developer who's moved into a lead role: it's easy to see how they might care enough about a particular design point to impose it on their team, either by implementing it themselves or by dictate. Where the trust comes in is in avoiding that trap. If I impose a choice on my team, I limit their ability to explore the design space themselves, take their own risks, and then potentially fail. I also limit their ability to correct my own misconceptions... if I think I'm right enough to mandate a design, I also probably think I'm right enough to ignore you and do my own thing anyway. Ironically enough, this makes the combination of conviction and risk aversion its own risk, and potentially a big one without a counterbalance. (On a personal level, if you go around imposing your will and/or ignoring points of view, you also lose the opportunity to learn from those around you.)

And this is where the bit about rhetorical technique comes into play. As satisfying as it can be to say that somebody you disagree with "...has not written a lot of code, frankly.", it's really beside the point. It doesn't matter, even were it true. What matters more to reasonable discussions about engineering technique are specific and testable statements: something like "Interface Segregation will help keep defect rates down by promoting better unit tests." You may or may not agree with this statement, but it's more likely to lead to a relevant conversation than slights on experience or dogmatic declarations of opinion as fact. Several years ago I was told in no uncertain terms that I had made a design choice that 'wasn't scalable'. I ran some tests and came back with some numbers that showed my choice satisfied our requirements. Who do you think won that debate, the buzzword or the numbers? Specifics and testability can count for a lot. Dogma, not so much.

To be fair, most of the two blog posts I mention above are focused on meatier material than my quotes imply. I particularly liked Atwood's conclusion that "Rules, guidelines, and principles are gems of distilled experience that should be studied and respected. But they're never a substute [sic] for thinking critically about your work." For experienced developers, I expect that Nene would also agree. After all, he writes that "you have the experience on your side to generally make the right judgement calls and you are likely to anyway apply them under most of the cases." In a sense, both are arguing the same thing, namely that judgement ultimately drives the design process over strict adherence to a set of rules. The difference is that Nene takes it a few steps further and draws the conclusion that good developers produce good code, good code is SOLID, and Atwood's blog post is either useless or harmful. Maybe there are some valid points here, but they're obscured by a dogmatism that is more of a distraction than a productive way to think about software design.

November 6, 2008

In an era in which customers are almost begging Microsoft not to discontinue Windows XP, I was surprised to see a recent news story on the end of life of Windows for Workgroups 3.11 (WfWG). If you're not completely up on the early history of Windows, WfWG 3.11 was released in August of 1993, and was the last of the major US-market versions of Windows without native Win32 support out of the box. It was also one of a series of Windows releases in the early 90's that turned Windows from 'the library you need to run Excel' into a legitimate platform for general purpose computing.

From its introduction in 1985 until the release of Windows 3.0 in 1990, Windows was almost entirely composed of the same basic core: DOS for file access and system startup, and a collection of three DLLs (KERNEL, GDI, and USER) for memory management, device-independent graphics, and the GUI widget library and window manager. Atop the core sat programs written to the Windows API. All of this ran sharing the one 20-bit segmented address space provided by x86 real mode, with 640K of usable memory. If you were lucky, you might have had a LIM/EMS board that allowed a few MB of extra memory to be addressed through a 64KB window at the top of the address space. If you were really lucky, you might have had an 80386 computer with a special program that let it pretend its extra memory worked like a LIM/EMS board. Needless to say, memory was tight, difficult to use, and dangerous to share between multiple programs.
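
For a concrete sense of how that 20-bit segmented address space worked, here's a small illustrative sketch in Common Lisp (the function name is mine, not anything from the era): a 16-bit segment value is shifted left four bits and added to a 16-bit offset, producing a 20-bit physical address.

    ;; Illustrative only: the physical address for a real-mode
    ;; SEGMENT:OFFSET pair. The segment is shifted left 4 bits
    ;; (multiplied by 16), added to the offset, and wrapped to 20 bits.
    (defun real-mode-address (segment offset)
      (logand (+ (ash segment 4) offset) #xFFFFF))

    ;; Example: A000:0000 maps to #xA0000 = 655360 = 640K, the top of
    ;; conventional memory, where the video and ROM region begins.
    (real-mode-address #xA000 #x0000) ; => 655360

Everything a program could touch had to fit somewhere in that single megabyte, and the space above 640K was already spoken for.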

The solution to this memory problem was initially to be OS/2. OS/2 was the operating system part of IBM's vast (and doomed) PS/2 program to recapture the PC space from clone vendors. Like DOS, it was done in partnership with Microsoft, but IBM took a much more active role in the design and development of OS/2 than it did with DOS. OS/2's most noteworthy feature was that it was designed to run in 80286 'protected mode' rather than the 'real mode' of DOS and Windows. Protected mode, as its name implies, added memory protection between processes that made multitasking more reliable. Protected mode also widened the physical address space of the CPU from 20 bits to 24 bits, making it possible to directly address 16MB of memory without resorting to tricks like LIM/EMS paging. This was all good, but it was tempered by the fact that OS/2 was expensive to run and didn't run DOS programs very well, thanks to its choice of 80286 protected mode over the 80386. The only software that could actually reap the benefits of protected mode under OS/2 was OS/2-specific software, which nobody had.

By the time 1988 rolled around, PCs capable of addressing more than 1MB of memory had been around since 1984, and there still wasn't a viable mainstream operating system that took advantage of this capability. This is when Windows got its big break: David Weise at Microsoft figured out how to run Windows itself in protected mode, along with unmodified Windows programs. Running existing software in protected mode was something of a holy grail, and Dr. Weise's idea ultimately resulted in Windows 3.0, released in 1990 to heady acclaim. Windows 3.0 also included the V86 multitasker from the older Windows/386 product. This meant Windows 3.0 could do things OS/2 could not, like run multiple DOS programs at the same time and run them in graphical windows on the desktop.

Windows 3.0 ended up being a runaway sales success, and after its release, the rest of the dominoes fell fairly quickly. Microsoft's partnership with IBM effectively ended, with IBM getting a source license to Microsoft products through the early 1990's. IBM ultimately used this license to develop a special version of Windows they bundled with OS/2 2.0 to let Windows programs run under OS/2 ("a better Windows than Windows" went the ad). Microsoft's own 32-bit OS/2 2.0 got dropped, and the work done on OS/2 NT (3.0) ultimately formed the basis for 1993's Windows NT and the Win32 API. The next version of 16-bit Windows, Windows 3.1, dropped support for real mode entirely, and as it evolved into Windows 95, more and more system services were moved into 32-bit code. This 16/32-bit hybrid version of Windows lasted until Windows Me. It was definitely baroque, and ended up notoriously unreliable, but its evolution from 256K 8088s to 128MB Pentiums is to my eye one of the more impressive examples of evolutionary software engineering. I don't miss using these versions of Windows, but it's easy to miss the 'brave new world' spirit they embodied.

August 18, 2008

Harry McCracken just wrote a bit comparing the price of PCs to Macintoshes. Like most of these guys, he misses the point. Consider his methodology: "I chose a standard [Apple] MacBook configuration...Then I configured laptops as similarly as possible from the country's two largest PC manufacturers". The problem is that this methodology takes the set of Apple machines to be the set of valid configurations for comparison, excluding configurations that Apple does not offer. Just for the sake of a fuller comparison, what does a MacBook cost with these configurations?

  • A TrackPoint.
  • A numeric keypad.
  • Two internal batteries.
  • Two internal mouse buttons.
  • A swappable drive bay.
  • A docking station.
  • A calibrated, high-gamut display and a digitizer.
  • A display smaller than 13" or bigger than 17".
  • No keyboard.
  • A convertible tablet configuration.
  • Embedded on a PXI card.
  • The absolute highest performance.
  • The absolute minimum cost.

Of course, none of these configurations are available from Apple. If you need or want one of these options, you can't get it at any price from Apple. Similar comparisons can be made in the server and desktop PC spaces.

This is an unsurprising result. When you enlarge the playing field beyond Apple's relatively limited reach, it becomes even more apparent that these comparisons aren't 'Apple vs. PC'; what they really are is 'Apple vs. The Entire Computer Industry'. Apple doesn't have the capability, desire, or brand to fare well in such a comparison: there are just too many market segments they don't address. Addressing all of these segments would leave them with a confusing product line, a highly taxed engineering group, and a muddled brand image.

Part of the value of the PC platform is that it is not subject to the limitations of being confined to one highly image-sensitive company. Part of the value of the PC is that it allows other vendors to enlarge the platform into new segments. Missing out on this is one of the costs of picking an Apple, and it's a cost that goes unmentioned in most comparisons, including McCracken's.

Older Articles...