Mike Schaeffer's Blog

Articles with tag: tech
January 12, 2012

Not too long ago, I wrote a bit on life with an iPhone 3G. Since then, Apple has revised the platform a few times, and I've recently upgraded to the iPhone 4S. This makes now as good a time as any to revisit the points in my earlier post and see what has changed:

  • Touch Screen - The Apple touch screen is about as good as it gets. The size is a good balance between utility and portability, the hardware is well executed, and the software is very, very fluid. That said, there's still the problem that touch screens eliminate the tactile feedback you get from physical buttons. It's harder to use the phone when your eyes aren't visually focused on the display. This limitation is innate to touch screens, but it's still annoying.

  • 'Ambient Information' - iOS 5 handles notifications much more nicely than in earlier versions of iOS. However, the homescreen is still largely dead to ambient information. The only two exceptions are the numeric badges attached to icons and the calendar icon (which displays the current date). The clock icon is wrong, the weather is wrong, and the map is wrong. My hunch is that this is partially to save on battery life, but given that the iPod nano can keep an analog clock icon current, some of this limitation seems gratuitous.

  • Inconvenient Portrait/Landscape Switching - Fixed with a nice lock facility in the task switcher. (Although I rarely use the lock, so maybe it wasn't a big problem after all.)

  • Multiple e-Mail boxes - Fixed in iOS 4 with the unified mailbox view.

  • Large e-mails - I'm not honestly sure if this has been fixed, or if I just get fewer large e-mails, but I haven't noticed this nearly as much.

  • Latency - The iPhone 4S is almost completely beyond reproach. (The latency on my old 3G got terrible with the upgrade to iOS 4.0, and the subsequent patches did nothing to correct it.)

  • App Store Rejections - There seems to have been less public drama lately around App Store rejections and policy changes. However, I suspect that's mainly because Apple has given a little and developers have grudgingly come to accept the limitations Apple still imposes.

  • App Store - It's grown to 500,000 (!) applications, but Apple still controls the horizontal and the vertical. Because they control the way applications are displayed, they have a huge degree of control over the exposure their ISV's get and the revenues those ISV's earn.

  • Keyboard - After over three years, it's still tedious and error-prone. It works, but just. What's changed in my thinking over the last couple years is that I no longer care. For me, the iPhone is almost entirely about content consumption, and the keyboard doesn't really matter that much.

  • Industrial Design - I still love the way the phone looks and feels. What's different for me is that I no longer bother with the add-on case.

A couple years ago, this is where I said I wouldn't switch away from an iPhone. I recently replaced one iPhone with another, so for me, this is still mostly true; the iPhone has evolved nicely over the years, and it still fits my needs better than the alternatives. However, two things have changed in the last few years. The first is that there's now a reasonable competitor. Unlike then, the alternative to iOS isn't Windows Mobile 6.5... the modern alternative, Android, has a touch screen, a modern web browser, and a fully stocked app store. Unless Apple sues Android into submission, Apple has lost these things as competitive differentiators.

The second thing that's changed for me in the last couple years is more personal. As much as I like the iPhone, I can't shake the feeling that it isn't a net improvement to my overall standard of living. Amy Breesman said it well when she was recently quoted in an NPR story: "I would almost say it's, like, a negative effect that it's had on my life. It's just kind of this rabbit hole that you're always going down." Maybe I'd miss it more if it were gone, but I suspect the time spent on the phone would be better spent elsewhere. Then again, I wouldn't have known about that NPR quotation unless I had heard it on the NPR app on my phone.

June 29, 2011

One typical property of Lisp systems is that they intern symbols. Roughly speaking, when symbols are interned, two symbols with the same print name will also have the same identity. This design choice has several significant implications elsewhere in the Lisp implementation. It is also one of the places where Clojure differs from Lisp tradition.

In code, the most basic version of the intern algorithm is easy to express:

(define (intern! symbol-name)
   (unless (hash-has? *symbol-table* symbol-name)
      (hash-set! *symbol-table* symbol-name (make-symbol symbol-name)))
   (hash-ref *symbol-table* symbol-name))

This code returns the symbol with the given name in the global symbol table. If there's not already a symbol under that name in the global table, it creates a symbol with that name and stores it in the hash prior to returning it. This ensures that make-symbol is only called once for each symbol-name, and the symbol stored in *symbol-table* is always the symbol returned for a given name. Any two calls to intern! with the same name are therefore guaranteed to return the exact same, eq? symbol object. At a vCalc REPL, this looks like so (the fact that both symbols are printed with ##0 implies that they have the same identity):

user> (intern! "test-symbol")
; ##0 = test-symbol
user> (intern! "test-symbol")
; ##0 = test-symbol

This design has several properties that have historically been useful when implementing a Lisp. First, by sharing the internal representation of symbols with the same print name, interning can reduce memory consumption. A careful programmer can write an implementation of interned symbols that doesn't allocate any memory on the heap unless it sees a new, distinct symbol. Interning also gives a (theoretically) cheaper mechanism for comparing two symbols for equality. Requiring that symbols with the same print name share the same identity means that a symbol name comparison can be reduced to a single machine instruction. In the early days of Lisp, these were very significant advantages. With modern hardware, they are less important. However, the semantics of interned symbols still differ from those of uninterned symbols in important ways.
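
To make the equality point concrete, here is a minimal sketch using the intern! defined above (symbol-name is a hypothetical accessor that returns a symbol's print name as a string):

(define sym-a (intern! "example-symbol"))
(define sym-b (intern! "example-symbol"))

(eq? sym-a sym-b)                  ; => #t - an identity test, effectively one pointer comparison
(equal? (symbol-name sym-a)
        (symbol-name sym-b))       ; => #t - but a name comparison has to walk both strings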

One example of this is that interned symbols make it easy to provide a global environment 'for free'. To see what I mean by this, here is the vCalc declaration of a symbol:

struct
{
     ...
     lref_t vcell;  // Current Global Variable Binding
     ...
} symbol;

Each symbol carries with it three fields that are specific to that symbol, and that are created and initialized when the symbol itself is created. Because the symbol's vcell is created along with the symbol, the global variable named by the symbol comes into existence at the same time as the symbol itself. Accessing the value of that global variable is done through a field stored at an offset relative to the beginning of the symbol. The same benefits accrue to property lists, which can also be stored in a field of the symbol. This is a cheap implementation strategy for global variables and property lists, but it comes at the cost of imposing a tight coupling between two distinct concepts: symbols and the global environment.
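
As a rough sketch of what this buys (illustrative only, not vCalc's actual code; the type and function names here are invented for the example), reading or writing the global binding named by a symbol compiles down to a single field access, with no symbol table lookup at runtime:

typedef void *lref_t;       // stand-in for vCalc's tagged reference type

struct symbol_rec
{
     lref_t vcell;          // Current Global Variable Binding
     // ... other per-symbol fields ...
};

// Reading the global binding named by a symbol is one load at a fixed offset.
lref_t symbol_global_value(struct symbol_rec *sym)
{
     return sym->vcell;
}

// Writing the global binding is one store.
void set_symbol_global_value(struct symbol_rec *sym, lref_t value)
{
     sym->vcell = value;
}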

The upside of this coupling is that it encourages the use of global symbol attributes (bindings and properties). During interactive programming at a REPL, global bindings turn out to be useful because they make it easy to 'say the name' of a binding to the environment. For bindings that directly map to symbols, the symbol itself is sufficient to name the binding and use it during debugging. Consider this definition:

(define *current-counter-value* 0)

(define (next-counter-value)
   (incr! *current-counter-value*)
   *current-counter-value*)

This definition of next-counter-value makes it easy to inspect the current counter value. It's stored in a global variable binding, so it can be inspected and modified during debugging using its name: *current-counter-value*. A more modular style of programming would store the current counter value in a binding local to the definition of next-counter-value:

(define next-counter-value
  (let ((current-counter-value 0))
    (lambda ()
      (incr! current-counter-value)
      current-counter-value)))

This is 'better' from a stylistic point of view, because it reduces the scope of the binding of current-counter-value entirely to the scope of the next-counter-value function. This eliminates the chance that 'somebody else' will break encapsulation and modify the counter value in a harmful fashion. Unfortunately, 'somebody else' also includes the REPL itself. The 'better' design imposes a cost: it's no longer as easy to inspect or modify current-counter-value from the REPL. (Doing so requires the ability to inspect or name the local bindings captured in the next-counter-value closure.)

The tight coupling between interned symbols and global variable bindings should not come as a surprise, because interning a symbol necessarily makes the symbol itself global. In a Lisp that interns symbols, the following code fragment creates two distinct local variable bindings, despite the fact that the bindings are named by the same, eq? symbol: local-variable.

(let ((local-variable 0))
   (let ((local-variable 0))
      local-variable))

The mismatch between globally interned symbols and local bindings implies that symbols cannot be as directly involved in talking about local bindings. A Common Lisp type declaration, for example, is an S-expression that says something about the variable named by a symbol:

(declare (fixnum el))

In contrast, a Clojure type declaration is a reader expression that attaches metadata to the symbol itself:

^String x

The ^ syntax in Clojure gathers up metadata and then applies it using withMeta to the next expression in the input stream. In the case of a type declaration, the metadata gets applied to the symbol naming the binding. This can be done in one of two ways. The first is to destructively update metadata attached to an interned symbol. If Clojure had done this, then each occurrence of symbol metadata would overwrite whatever metadata was there before, and that one copy of the metadata would apply to every occurrence of the symbol in the source text. Every variable with the same name would have to have the same type declarations.
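
One way to see the reader's role directly is to read a declaration back as data and inspect the metadata attached to the resulting symbol. At a Clojure REPL (a quick sketch):

user=> (meta (read-string "^String x"))
{:tag String}
user=> (meta (read-string "x"))
nil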

Clojure took the other approach, and avoids the problem by not interning symbols. This allows metadata to be bound to a symbol locally. In the Clojure equivalent of the local-variable example, there are two local variables, each named by a distinct symbol (though the two symbols share the same name).

(let [local-variable 0]
   (let [local-variable 0]
      local-variable))
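
This difference is easy to observe at a Clojure REPL: two symbols with the same name are equal by value, but they are not the same object, so each can carry its own metadata:

user=> (= 'local-variable 'local-variable)
true
user=> (identical? 'local-variable 'local-variable)
false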

The flexibility of this approach is useful, but it comes at the cost of losing the ability to store values in the symbols themselves. Global symbol property lists in Lisp have to be modeled using some other means. Global variable bindings are not stored in symbols, but rather in Vars. (This is something that compiled Lisps tend to do anyway.) These changes result in symbols in Clojure becoming slightly 'smaller' than in Lisp, and better aligned with how they are used in modern, lexically scoped Lisps. Because there are still global variable bindings available in the language, the naming benefits of globals are still available for use in the REPL. It's taken a while for me to get there, but the overall effect of un-interned symbols on the design of a Lisp seems generally positive.

September 12, 2009

It took long enough, but finally, I've taken the time to set up a better workflow for this blog:

  • The master copy of the blog contents is no longer on the server. It's now on one of my personal machines.
  • I'm managing site history using git. This was a nice idea, but git and blosxom have a fundamental difference of opinion on the importance of file datestamps: blosxom relies on datestamps to assign dates to posts, and git deliberately updates datestamps to work with build systems. There are ways to reconcile the two, but it's not worth the time right now.
  • Uploads to the server are done with rsync invoked through a makefile. (ssh's public key authentication makes this blazingly fast and easy.)

Maybe now, I'll finally get around to writing a little more. (Or, I could investigate incorporating Markdown, or the Baseline CSS Framework, or....)

August 5, 2009

I've been meaning to write this for months... after switching to an iPhone last October, I have some thoughts on the transition away from Windows Mobile. Most of my detailed comments are complaints, so before I continue, it's worth saying that I do think the iPhone is the best smart phone you can buy. It is, by far, the best answer the industry has come up with for this class of device. That said, it's more fun (and potentially useful) to complain:

  • Touch Screen - I remember shopping with my parents for a car in the late 80's. One of the cars we looked at was a Buick Riviera with a touch screen in the center console. It was cool, but since it lacked tactile feedback, you had to be looking at it to use it. Flash forward 23 years, and you can replicate this experience in the palm of your hand, for better or for worse.

  • 'Ambient Information' - The phone does a poor job of making information ambiently available. To see your next appointment, you need to open the Calendar unless the reminder has already displayed. (This could go on the home page.) To be notified of a new e-mail, you need to unlock the phone and look at the home page. (This could be an LED on the case.) As notifications build up, they wind up truncated and incomplete, presumably so they can fit in an artificially small box on the screen.

  • Portrait/Landscape - I wake up in the morning and want to check e-mail before I get up. I grab the phone off the nightstand, look at the display, and it... switches to landscape mode. I'm driving down the road and want to skip a track, so I grab the phone (eyes on the road), put my finger in the general area where the 'forward' button is, and it... switches to landscape mode. Landscape is useful when you need it, and a usability menace when you don't. There needs to be better control over when it engages and when it doesn't. (In this case, physical buttons for skipping forward and backward among tracks might be nice too... Buick ultimately dropped the touch screen entirely, and modern cars with navigation tend to also offer physical controls for key functions.)

  • e-Mail - I have two e-mail accounts set up on my phone: personal and business. It takes five taps to switch between them. A unified view would be nice (a list of the union of all inboxes, color-coded by inbox). An easier way to pick an inbox would be almost as nice.

  • Large e-mails - By default, large e-mails are only partially downloaded to the phone, and there's a button at the bottom of one of these mails that lets you download the rest. Of course, once it does, it then zips you back to the top of the mail, so you have to manually scroll through the (remember, it's large) e-mail to get back to where you were reading. Argh.

  • Latency - Maybe a 3GS would fix this, but the phone seems very slow to change modes and update the display. I find myself continually waiting split-seconds for the thing to animate the transition from one display to the next. I'm asking a lot here, but I don't care... I use the thing most waking hours of most days.

  • App Store Rejections - This is a problem, it sucks for app developers, and it won't matter to the success of the platform. The vast majority of customers will never hear that Apple censored a dictionary (!), and even if they did, it won't stop them from buying. In the short term, my guess is that Apple will make whatever minimal changes it needs to make to keep developers quiet enough, and the iPhone will continue to do very well. Phone buyers don't care enough about choice, and app developers will tend to always want to code for the platform where they have the best shot at making money, which is currently the iPhone. In the long term, my guess is that a lot more of this content will wind up on mobile web sites than in the store. After all, a website can be an icon on the home page, avoid the risk of Apple's rejection, and also get to run on Android, Pre, and Windows Mobile.

  • App Store - 25,000 applications on the site, and I might look at 10 or 20 before deciding to make a purchase. The way the store presents applications (controlled by Apple) has a huge impact on which apps succeed and which apps fail. Even if the rejection problem magically goes away, Apple still controls the horizontal and the vertical. (A lot like Google's control over the fate of websites...)

  • Keyboard - After ten months, it's still tedious and error-prone for me. It works, but just. Apple should provide a keyboard layout that works like a Blackberry (or even T9) and trades off multiple letters per key in exchange for larger keys.

  • Industrial Design - I love the way the phone looks and feels, so I wrap it in a tacky add-on case to 'protect it'. So does most everybody else. Last I heard, good design was about making a product that looks good and works well. The rampant sales of cases implies to me that something is missing from the 'works well' part of that equation.

I do like the thing, and I wouldn't switch away, but it's far from perfect. Let's hope it gets better.

April 16, 2009

I recently was asked why I like Lisp. For me, it boils down to the fact that Lisp makes it easy to control when your code is evaluated. Most languages only let you evaluate code at runtime. There are ways around this (C++ templates, cpp, code generation, etc.), but they all have severe limitations. In contrast, Lisp makes it easy to run actual Lisp code at compile time (macros) or even read time (reader macros). Combine that with the fact that Lisp code is pretty easy to manipulate with Lisp itself, and it becomes much easier to do things that are usually restricted to language developers, which can be quite a force multiplier. Just to illustrate, most of the language features that C# has over Java (LINQ, properties, closures, lambda syntax, yield return etc.) could be added to Java by 'ordinary developers' if Java somehow had these things I like so much about Lisp.

But it doesn't...
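
To make the compile-time point concrete, here is a minimal sketch in a Scheme-like dialect (assuming a define-macro facility; the details vary by implementation). The macro body is ordinary Lisp code, but it runs when the call site is compiled, and the list it returns is the code that actually gets compiled:

;; Swap the values of two variables. (Hygiene around the name 'tmp'
;; is ignored to keep the sketch short.)
(define-macro (swap! a b)
  `(let ((tmp ,a))
     (set! ,a ,b)
     (set! ,b tmp)))

;; At compile time, (swap! x y) expands into:
;;   (let ((tmp x)) (set! x y) (set! y tmp))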

April 16, 2009

...the Linux Hater is back.... and OpenMoko is not.

I can't say that either of these things surprises me.

February 12, 2009

In the recent debate surrounding the SOLID Principles of Object Oriented Design, the following two quotes stood out.

Last week I was listening to a podcast on Hanselminutes, with Robert Martin talking about the SOLID principles. ... And, when I was listening to them, they all sounded to me like extremely bureaucratic programming that came from the mind of somebody that has not written a lot of code, frankly.

Joel Spolsky

"Reading The Ferengi Programmer by Jeff Atwood really made me quite concerned. Here.s clearly an opinion which to me seems not grounded in sustained experience..."

Dhananjay Nene

Both of these are speculative slights on someone else's experience level, either generally or with a particular bit of technology. Bad rhetorical technique aside, my guess is that these are rooted in a fundamental lack of trust that the other side might actually have a well-thought-out reason for their point of view. This is an easy trap to fall into, particularly in a field as subjective as software design. Take the 'editor wars' as an example: which is better, Emacs, vi, or a full-featured IDE? I don't know the answer to this question, but I do know that I can find people that will tell me I'm wrong for preferring Emacs. Change the debate to something a bit more relevant, something like the design of a large piece of software, and people get even more vitriolic.

At least part of the solution to this problem is plain, old trust. Think about a good developer that's moved into a lead role: it's easy to see how they might care enough about a particular design point to impose it on their team, either by implementing it themselves or by dictate. Where the trust comes in is in avoiding that trap. If I impose a choice on my team, I limit their ability to explore the design space themselves, take their own risks, and then potentially fail. I also limit their ability to correct my own misconceptions... if I think I'm right enough to mandate a design, I also probably think I'm right enough to ignore you and do my own thing anyway. Ironically enough, this makes the combination of conviction and risk aversion its own risk, and potentially a big one without a counterbalance. (On a personal level, if you go around imposing your will and/or ignoring points of view, you also lose the opportunity to learn from those around you.)

And this is where the bit about rhetorical technique comes into play. As satisfying as it can be to say that somebody you disagree with "...has not written a lot of code, frankly.", it's really beside the point. It doesn't matter, even were it true. What matters more to reasonable discussions about engineering technique are specific and testable statements: something like "Interface Segregation will help keep defect rates down by promoting better unit tests." You may or may not agree with this statement, but it's more likely to lead to a relevant conversation than slights on experience or dogmatic declarations of opinion as fact. Several years ago I was told in no uncertain terms that I had made a design choice that 'wasn't scalable'. I ran some tests and came back with numbers showing that my choice satisfied our requirements. Who do you think won that debate, the buzzword or the numbers? Specifics and testability can count for a lot. Dogma, not so much.

To be fair, most of the two blog posts I mention above are focused on meatier material than my quotes imply. I particularly liked Atwood's conclusion that "Rules, guidelines, and principles are gems of distilled experience that should be studied and respected. But they're never a substute [sic] for thinking critically about your work." For experienced developers, I expect that Nene would also agree. After all, he writes that "you have the experience on your side to generally make the right judgement calls and you are likely to anyway apply them under most of the cases." In a sense, both are arguing the same thing, namely that judgement ultimately drives the design process over strict adherence to a set of rules. The difference is that Nene takes it a few steps further and draws the conclusion that good developers produce good code, good code is SOLID, and Atwood's blog post is either useless or harmful. Maybe there are some valid points here, but they're obscured by a dogmatism that is more of a distraction than a productive way to think about software design.

November 6, 2008

In an era in which customers are almost begging Microsoft not to discontinue Windows XP, I was surprised to see a recent news story on the end of life of Windows for Workgroups 3.11 (WfWG). If you're not completely up on the early history of Windows, WfWG 3.11 was released in August of 1993, and was the last of the major US-market versions of Windows without native Win32 support out of the box. It was also one of a series of Windows releases in the early 90's that turned Windows from 'the library you need to run Excel' into a legitimate platform for general purpose computing.

From its introduction in 1985 until the release of Windows 3.0 in 1990, Windows was almost entirely composed of the same basic core: DOS for file access and system startup, and a collection of three DLL's (KERNEL, GDI, and USER) for memory management, device-independent graphics, and the GUI widget library and window manager. Atop the core sat programs written to the Windows API. All of this ran sharing the one 20-bit segmented address space provided by x86 real mode, with 640K of usable memory. If you were lucky, you might have had a LIM/EMS board that allowed a few MB of extra memory to be addressed through a 64KB window at the top of the address space. If you were really lucky, you might have had an 80386 computer with a special program that let it pretend its extra memory worked like a LIM/EMS board. Needless to say, memory was tight, difficult to use, and dangerous to share between multiple programs.

The solution to this memory problem was initially to be OS/2. OS/2 was the operating system part of IBM's vast (and doomed) PS/2 program to recapture the PC space from clone vendors. Like DOS, it was done in partnership with Microsoft, but IBM took a much more active role in the design and development of OS/2 than they did with DOS. OS/2's most noteworthy feature was the fact that it was designed to run in 80286 'protected mode' rather than the 'real mode' of DOS and Windows. Protected mode, as its name implies, added memory protection between processes that made multi-tasking more reliable. Protected mode also widened the physical address space of the CPU from 20 bits to 24 bits, making it possible to directly address 16MB of memory without resorting to tricks like LIM/EMS paging. This was all good, but it was tempered by the fact that OS/2 was expensive to run and didn't run DOS programs very well, thanks to its choice of 80286 protected mode over the 80386's. The only programs that could actually use the benefits of protected mode under OS/2 were OS/2-specific programs that nobody had.

By the time 1988 rolled around, PC's with the capability of addressing more than 1MB of memory had been around since 1984, and there still wasn't a viable mainstream operating system that took advantage of this capability. This is when Windows got its big break: David Weise at Microsoft figured out how to run Windows itself in Protected Mode, along with unmodified Windows programs. Running existing software in protected mode was something of a holy grail, and Dr. Weise's idea ultimately resulted in Windows 3.0, released in 1990 to heady acclaim. Windows 3.0 also included the V86 multitasker from the older Windows/386 product. This meant Windows 3.0 could do things OS/2 could not do, like run multiple DOS programs at the same time and run them in graphical windows on the desktop.

Windows 3.0 ended up being a runaway sales success, and after its release, the rest of the dominos fell fairly quickly. Microsoft's partnership with IBM effectively ended, with IBM getting a source license to Microsoft products through the early 1990's. IBM ultimately used this license to develop a special version of Windows they bundled with OS/2 2.0 to let Windows programs run under OS/2 ("a better Windows than Windows" went the ad). Microsoft's own 32-bit OS/2 2.0 got dropped, and the work done on OS/2 NT (3.0) ultimately formed the basis for 1993's Windows NT and the Win32 API. The next version of 16-bit Windows, Windows 3.1, dropped support for real mode entirely, and as it evolved into Windows 95, more and more system services were moved into 32-bit code. This 16/32-bit hybrid version of Windows lasted until Windows Me. It was definitely baroque, and ended up notoriously unreliable, but its evolution from 256K 8088's to 128MB Pentiums is to my eye one of the more impressive examples of evolutionary software engineering. I don't miss using these versions of Windows, but it's easy to miss the 'brave new world' spirit they embodied.

August 18, 2008

Harry McCracken just wrote a bit comparing the price of PC's to Macintoshes. Like most of these guys, he misses the point. Consider his methodology: "I chose a standard [Apple] MacBook configuration...Then I configured laptops as similarly as possible from the country's two largest PC manufacturers". The problem is that this methodology takes the set of Apple machines to be the set of valid configurations for comparison, excluding configurations that Apple does not offer. Just for the sake of a fuller comparison, what does a MacBook cost with these configurations?

  • A TrackPoint.
  • A numeric keypad.
  • Two internal batteries.
  • Two internal mouse buttons.
  • A swappable drive bay.
  • A docking station.
  • A calibrated, high-gamut display and a digitizer.
  • A display smaller than 13" or bigger than 17".
  • No keyboard.
  • A convertible tablet configuration.
  • Embedded on a PXI card.
  • The absolute highest performance.
  • The absolute minimum cost.

Of course, none of these configurations are available from Apple. If you need or want one of these options, you can't get it at any price from Apple. Similar comparisons can be made in the server and desktop PC spaces.

This is an unsurprising result. When you enlarge the playing field beyond Apple's relatively limited reach, it becomes even more apparent that these comparisons aren't 'Apple vs. PC'; what they really are is 'Apple vs. The Entire Computer Industry'. Apple doesn't have the capability, desire, or brand to fare well in such a comparison: there are just too many market segments they don't address. Addressing all of these segments would leave them with a confusing product line, a highly taxed engineering group, and a muddled brand image.

Part of the value of the PC platform is that it is not subject to the limitations of being confined to one highly image-sensitive company. Part of the value of the PC is that it allows other vendors to enlarge the platform into new segments. Missing out on this is one of the costs of picking Apple, and it's a cost that goes unmentioned in most comparisons, including McCracken's.

August 12, 2008

I have a new favorite blog, the Linux Hater's Blog. Some anonymous Linux user has taken it upon himself to open a blog dedicated to all of the many reasons why desktop Linux sucks (which it does). While it's more than a little mean-spirited, this blog is the dissenting voice of Linux. It is the conscience that, if heeded, will make the Linux desktop a better place to work.

For all of the problems with Linux, it is also the one major platform that allows the motivated individual or company to actually address those problems. The single biggest difference between the Linux Hater's Blog and the (would-be) Windows and MacOS X Hater's Blogs is that on the Linux blog, it's actually possible to do something about the problems. Consider this: 18 years ago, there was no Linux; 12 years ago, there was no Gnome. In 1990, the Linux Hater's blog would have had one post: "It doesn't exist, go buy Windows." The reason I mention this is that while it's easy to dismiss the benefits of open source as purely theoretical (i.e.: "Have you ever needed to recompile your kernel?"), those benefits are the entire reason it exists at all.

To look at this in a bit more depth, consider the gnome-panel as an example. Based on the copyright claims in the source code, gnome-panel is itself a collaboration of Eazel, Helix Code/Ximian/Novell, Sun Microsystems, Red Hat, The Free Software Foundation, Ian McKellar, James Wilcox, Rob Adams, Vincent Untz, and Carlos Garcia Campos. All of these contributors found things to change or fix, 'itches to scratch', and all of them changed or fixed the gnome-panel. This is something that basically cannot happen in the model of closed source software. If you want to change something in MacOS X, you basically have three options: try to convince Apple it's a worthwhile change by presenting (and giving up the rights to) a business case justifying the feature, try to go to work for Apple in the right group and convince them to let you implement your feature, or reimplement the entire thing yourself.

As a result of these kinds of trade-offs, cross-organization collaboration in closed source is a lot harder to come by than in open source. Closed source essentially divides the stakeholders in a piece of software into two groups: those that can take responsibility for the software by making changes, and those that cannot and must either accept the changes as provided or work around them. In that sense, Free Software is the licensing model that brings to software the democratic ideals of personal responsibility and the sovereignty of the people. Like any democracy, in the short term it will have issues compared to more centralized forms of planning, but in the long term it will be a much more vibrant and productive place to be. This is also why the Linux Hater's Blog is so very important. To see why, continue the analogy with democracy a bit, and consider the process by which the United States adopted its constitution.

After the U.S. Constitutional Convention in Philadelphia, there came the long and highly political process of getting the states to ratify the new form of government. During this two-year-long debate, there was a series of papers, the Federalist Papers, written in support of the proposed Constitution. Less well known are the Anti-Federalist Papers, a series of dissenting arguments against ratification. This dissent primarily centered around the lack of a Bill of Rights, and ultimately led to the incorporation of a Bill of Rights as the first ten amendments to the Constitution. The dissent was not just criticism: open process and free debate allowed it to be a key part of the construction of the Constitution.

This is a much grander version of what the Linux Hater's blog can do for Linux. By dissenting against the idea that Linux is already ready for the desktop (or the server), it also provides a list of weaknesses to fix. Unlike a Windows Hater's Blog, the freedoms of Linux allow this list of weaknesses to effectively become a to-do list for anyone or any company with the motivation and time to do the work. It is therefore not a liability to Linux, but an asset that derives its value from the freedom at the core of Free Software. Ironically enough, because of this, the 'Linux Hater' could easily turn out to be one of Linux's best friends.
