Since my last post, I dropped by an Apple Store to take a look at the 2015 MacBook. It is difficult to overstate how startlingly small the new machine is in person. I may be biased by the internal specifications, but the impression is much more 'big tablet' than 'small laptop'. The other standout feature was the touchpad. It continues Apple's tradition of high-quality touchpad implementations, removes the mechanical switch and hinge, and adds force sensitivity and haptic feedback. The mechanical simplifications alone are a worthwhile improvement.
I also spent some time typing on the keyboard. It's as shallow as you'd think, but the keys are very direct and have a positive feel. There's none of the subtle rattling found on most small keyboards, and it registered every keypress. I'm not completely convinced yet, but it at least seems possible that this type of keyboard could become the preferred keyboard for some typists.
The performance of the machine is also a point of interest. Even the lightly loaded demo machine on the showroom floor had a few hiccups paging the display around from one virtual desktop to the next. Maybe it's nothing, but it does make me wonder if the machine can keep up with daily use, particularly after a few OSX updates have been released. (For me, I think it'd be fine, but I spend most of my time in Terminal, Emacs, and Safari, none of which are exactly heavy-hitters.)
A few more-or-less random observations about our industry, and various recent product releases:
- Google Nexus 6 - Nice phone, but they lost one of the biggest competitive differentiators of the older 4 and 5 models: the price. And they did it with a more niche, phablet-sized device. Seems like a big mistake.
- 2015 MacBook - Assuming they didn't screw up the keyboard too badly, Apple has something special here. Powerful enough to do what most people need, a Retina display, and highly portable. The single port seems like genius, in that it eliminates the need/desire for devices like the Henge Dock. It'll also be interesting to see how many of the new MacBook design elements make it to the Pro and Air.
- Apple Watch - Another display to serve as a distraction, another battery to keep charged, and another device to maintain and upgrade? To me, the marginal benefit of carrying and maintaining the watch does not sufficiently offset the marginal costs. If it's more about style than function, then Apple has to make a compelling argument that the Edition watch is really worth 50 times the basic watch, despite the fact it's stylistically so similar.
Windows 3.0 also had the benefit of a huge installed base of latent and mostly unused hardware. A typical business PC in 1990 might have been something like an 80286 with 2MB of RAM, a 40MB disk, and an EGA (640x350x4bpp) bitmapped display. It would then be running DOS software that basically couldn't address more than the first 640K of memory, and if you ever saw the bitmap display in use, it was probably for a static plot of a graph. Compared to a Macintosh from the same year, a PC looked positively like something from a totally different generation. Windows 3.0 changed all this. It allowed you to switch your 80286 into 'Protected Mode' to get at that extra memory. It provided a graphics API (with drivers!) and forced programs to use the bitmapped display. It provided standard printer drivers that worked for all Windows programs. Basically, for $89 it took the hardware you already had and made it look almost like the Macintosh that would otherwise have cost you thousands of dollars. It was utterly transforming.
Almost 20 years later, the most interesting thing about this is the relative timing of the hardware and its software support. Most of the hardware in my 'typical 1990 PC' was introduced by IBM in its 1984 announcement of the IBM PC AT. The first attempt by IBM and Microsoft to support the 80286 natively came three years later in 1987's release of OS/2. The first 286-native platform to reach mainstream acceptance came in 1990. Think about that: it took 6 years for the open PC market to develop software capable of fully utilizing the 80286. The 80386 fared even worse; the first 386 machine was released in 1986, and it didn't have a major mainstream OS until either 1993 or 1995 (depending on whether or not you count Windows NT 3.1 as 'mainstream'). Thus, there were scores of 286 and 386 boxes that did nothing more than execute 8086 code really, really fast (for the time :-)). In modern terms, this is analogous to a vendor introducing a hardware device today and then delaying software support until 2018.
This is emblematic of the hugely diminishing value of an open device platform in today's computer industry. In 1989, using a computer was largely an exercise in getting the damn thing to work. When those are the issues you're worried about as a PC user, an open platform is helpful because it enables a broader selection of vendors for parts and software. If you've run out of slots for both your video board and your bus mouse interface, you can always switch to an ATI video board with a built-in mouse port. If you need a memory manager that supports VCPI to enable your V86 multitasker, you can always switch to something like QEMM/386. If you need more memory to run your spreadsheet, you can go to AST Technologies and buy a LIM/EMS board. When you're worried about these kinds of issues, issues 'low in the stack', the flexibility of choice provided by openness is useful enough that you might be more willing to bear the costs of a market slow to adopt new technologies.
Of course, price is also a factor. In 1988, Byte magazine ran a review of Compaq's Deskpro 386s. This was their first 80386SX machine, a desktop computer designed to be a cheaper way to run 80386-specific software. The cost of the review machine was something like $15,000. In 2007 dollars, this would buy you a nice, reasonably late-model BMW 3-series. A year later in 1989, my family bought a similar machine from ALR, which cost around $3,000. This isn't nearly as bad, but it's still around $5,200 in 2007, which basically means that a mid-range 1989 PC is priced at the very top end of the 2007 PC market. With monetary costs that high, that other benefit of openness, price competition, becomes a much bigger deal. Compaq ended up suffering badly as competition drove market prices down to where they are today.
In the intervening 20 years, both of these circumstances have changed dramatically. PCs, both Windows and Macintosh, are well enough integrated that nothing needs to be done to get them to run aside from unpacking the box. NeXTStep, which in 1994 required a fancy $5,000 PC bought from a custom vendor to run well, will shortly be able to run (with long-range, high-speed wireless!) on a $200 handheld bought at your local shopping mall. Our industry has moved up Maslow's hierarchy of needs from expensive, unreliable hardware, run by the dedicated few, to cheap, reliable hardware, run by the disinterested many. We can now concentrate on more interesting things than just getting the computer to work, and it is with this shift that some of the unique value of openness has been lost. Unfortunately, the costs have been retained; there is no countervailing force in the market forcing open platforms to move any faster.
Personally, I believe this bodes very well for Apple's latest attempt to own the smartphone space. There will only be one vendor and one price for the iPhone, but the platform will be able to move faster to adopt new technologies, and integrate them more tightly, because there's only one kind of hardware to run on. The fewer hardware configurations and stricter quality control guidelines will make it easier (and more necessary) for developers to produce high-quality software. The fact that entry into the software market is controlled doesn't matter, because there are still more eligible developers than the platform actually needs. The net result of all this is that Apple, again, has a product that looks 'next generation', but the pricing and openness factors that cost them that advantage in the early 90's are no longer there. It's a good time to be involved in the iPhone, methinks.
What ultimately drove me to leave that role is something that I think most technical jobs, particularly those in product development, have in common: a severe risk of detachment from your clients. Software developers, myself included, tend to be an introverted lot; even if they weren't, it's often perceived to be in the best interests of a software house to keep software developers on task developing software. In other words, there are both personal and corporate pressures to keep developers hacking away at the code instead of talking to customers. The risk here is that the people who best understand the products are the people who potentially least understand the customers. I can tell you from firsthand experience that, while I knew in detail the sampling characteristics of the device I was building, I had no idea how it might be used in a chemical plant to measure a temperature sensor and control a pump or a heater. It's easy to develop a product in that kind of vacuum and then get it totally wrong for your market. If you're not careful, it's also easy to get your product totally wrong for your own company, which is arguably what happened to the group I was in.
Organizations counter this risk of specialization by having other job roles that are more focused on customer needs and less focused on the technology itself. From the perspective of someone sitting in an R&D lab, these other jobs represent the steps a product takes out the door towards doing useful work for a customer. A researcher discovers a new technology or technique, a developer turns it into a product, a marketer figures out how to promote it, a salesperson sells the product, and finally, a consultant integrates it into the customer's system. As work moves down this path, it gets progressively more applied and progressively less abstract. The reverse is true too: the further away from pure research you get, the closer you get to customer requirements. As much as a development lab has the responsibility to push product out to customers, customer-facing staff has the responsibility to push information on customer requirements back to the lab to drive product development priorities. If a developer isn't talking to a customer and a consultant is, then it's a safe bet that the consultant has a better idea of what a product needs to do to sell.
This is the reasoning that led me out of pure software development and into a consulting role at a different firm. In this new role, I was on projects developing configuration websites for computer resellers. If you envision Dell's online computer configuration tool, you're not far off from the kind of websites I was developing. While consultants at this company still did a bit of programming, the theory was that the heavy lifting of actually building the software was done in the company's R&D lab. Consultants were to focus on more basic customization and integration work. On my projects, most of my software development work was limited to customizing the layout of web pages and writing interfaces to databases and authentication providers. Interesting stuff, but not close to the same technical league as what I was doing in my previous job.
The risks in this kind of internal consulting role are different from the risks in a purely development role. Unlike a developer sitting in an R&D lab, someone who might get to see a customer once a month or two at best, a consultant quite often works on-site with the customer on a daily basis. In fact, it's easy for a consultant to see customer staff far more often than other employees of their own company. Of course, this potential isolation cuts both ways: it affects the R&D lab and the sales group too. In the worst-case scenario, you end up with three independent silos in your organization: sales selling whatever they want, developers developing whatever they want, and the poor consultants caught in the middle, between an over-ambitious contract and an under-done base product. I shared an office with a guy working on a project that was sold as a one-month customization of one of our other base products. When I joined the firm, this project was in its 18th month of (mostly non-billable) consulting time. There was obviously a lot that had gone horribly wrong on that project, but foremost was a total disconnect between what the salespeople thought they had ready to sell and what the developers had actually produced to sell. (Not that the consultants were blameless on that project either: there were huge estimation and process problems in the customization work plan.)
I do not know of a role that is completely free of these types of risks, but my experience has tended to be that the difference between success and failure in any role is more related to communication with those around you than it is to technical skills. It is as much about giving your stakeholders what they want when they want it as it is anything else (including giving them something 'great'). This can be a difficult pill to swallow since it places emphasis on skills that do not come naturally to many developers. If you're a developer used to setting the development agenda it's even worse, since it might involve ceding at least some of this power to people downstream and closer to customers. However, if you're really good, you will do whatever it takes (even if it's 'not your job') to know your customer's business well enough to anticipate what they need before they request it. Either way, success is ultimately about pressing your boundaries beyond your comfort zone to get what you need to do the thing you love in a way that satisfies those that care about your work.
I wrote this list of good software books a few years ago, in an effort to record books that I've found to be interesting and/or useful. None of these books are of the 'X in Y days' variety, and very few qualify as reference books likely to be useful in your day-to-day work. That said, most of them will still make you a better developer.
This book is a classic reference on how code should be constructed, when working at the level of program statements and functions. While it does not cover some of the newer programming constructs or languages, this book should be considered essential reading for any working programmer.
As the author puts it in the preface, this book talks about the more glamorous aspects of programming. Culled from a series of articles in Communications of the ACM, Bentley presents a series of insights on solving common programming problems.
This is the series on the common algorithms used in software. Knuth presents, in gory detail, a series of algorithms and the mathematics on which they depend. You probably should not expect to read (or understand) everything that's here, but if you're in a bind, these books can be invaluable references. Robert Sedgewick also has an alternative that might be more accessible.
The definitive reference on compiler design. If you have to do anything with expression parsing, analysis, or translation, this is a good introduction.
C is a language notoriously full of little oddities that can make programmers' lives more difficult. This book documents a lot of them, explaining a little about why the language works the way it does. Reading it is a good way to become a more seasoned user of C (and by implication C++). Expert C Programming by Peter van der Linden also appears quite good, although I've not read it all the way through.
Ever wonder why C++ works the way it does? This book, documenting the history of C++ and C with Classes, talks a lot about the design decisions made while the language was being developed. It's a dry read at times, but understanding the intent of the designers of a tool can really help while using the tool.
In the C++ world, this book is old: it predates the ANSI standard, and many of the features introduced later on in the standardization effort (like STL). Despite that, Coplien describes techniques that give C++ programs a level of flexibility approaching that of languages like Lisp and Smalltalk. He covers garbage collection, multi-method dispatch, and replacement of code at runtime, among other techniques. If you need those capabilities, I'd recommend another language. If you have to use C++, I'd recommend this book.
This is one of the most detailed books on the core Win32 API I've ever seen. If you need to work with Win32, you need good documentation, and this is the book to get. Not only does it talk in great detail about API functions, it also talks about some of the differences between the various platforms extant in 1997. I can't recommend it enough, but I do hope it gets updated to include Windows 2000 and XP.
First things first: this book is centered around Windows 3.1. While a lot has changed, a lot has stayed the same about the Windows API. Keeping that in mind, DiLascia's narrative through the process of encapsulating the Win16 API in C++ is a perfect example of how to manage the complexity of a difficult API using an object oriented programming environment. Even if you never write a Windows framework yourself, it's still good reading on the issues involved in both Windows programming and framework design in general.
A good introduction to a wide variety of graphics algorithms and techniques. Serious work in the field will require more specialized books, however.
A good, and beautiful, discussion on how to graphically depict numerical information, and do so reliably and clearly.
Not really a visualization or graphics book, except for the material covering the AT&T GraphViz (dotty and lefty) toolset. If you're programming with data structures more elaborate than lists, this tool can be an easy way to interpret data structures dumped to disk. With lefty, it's even possible to program specific user interfaces to edit graphical displays.
This book is dense, but essential reading. With the two biggest emergent platforms (Java and .Net) depending on advanced runtime support, these more advanced runtime engines are now a fact of life. While this book focuses on the shared source implementation of the .Net runtime, it is an excellent introduction to the ideas behind Java, .Net, and even languages like Smalltalk, Scheme, and Lisp.
This book documents something called the Metaobject Protocol as found in the object-oriented part of some Common Lisp implementations. If you need to use metaobjects, it's of course a useful book, but it's also informative in general about the implementation of dynamic languages. I particularly liked the coverage of method dispatch and selection.
While it's been superseded by the ANSI HyperSpec as a reference for Common Lisp, this is still a good reference to the language. For programmers in general, and particularly language implementors, this book is also full of good ideas on algorithms and software design.
Macros (code-rewrite rules) are one of the hardest things to understand in Common Lisp, and other similar languages. Graham goes into a lot of detail about how they can be used in real-world development, and includes a bunch of good examples. It is available free of charge online.
This is one of those books that has the potential to seriously change the way you think. Written as a textbook for an introductory programming course at MIT, this book talks a lot about varying styles of computation and how they can be used.
CREATE TABLE COMMON.SAMPLE_TABLE (
    NAME VARCHAR(64) NOT NULL,
    STATUS CHAR(1),
    X_C NUMBER (10),
    Y_C NUMBER (*,10) NOT NULL,
    Z_C NUMBER (*,10),
    FOO VARCHAR2 (18) NOT NULL,
    BAR DATE,
    BAZ TIMESTAMP
)
/

Once the table is created, it is then possible to ask the database to describe the table:
common@XE> desc COMMON.SAMPLE_TABLE;
 Name                          Null?    Type
 ----------------------------- -------- ----------------------------
 NAME                          NOT NULL VARCHAR2(64)
 STATUS                                 CHAR(1)
 X_C                                    NUMBER(10)
 Y_C                           NOT NULL NUMBER(38,10)
 Z_C                                    NUMBER(38,10)
 FOO                           NOT NULL VARCHAR2(18)
 BAR                                    DATE
 BAZ                                    TIMESTAMP(6)

For some reason, the syntax of Oracle's description of the table's definition is entirely different from the syntax of the DDL used to define the table in the first place. Not only does the description not use DDL, minor details are different too. For example, the relative placement of a column's nullability (NOT NULL) and its data type is reversed from one representation to the other. This makes converting a table description into corresponding DDL a trickier process than it would otherwise be. Another difference (loss?) is that the DDL syntax allows for table-specific attributes and the description syntax does not. That means that the table's full definition really might look something like this:
CREATE TABLE COMMON.SAMPLE_TABLE (
    NAME VARCHAR(64) NOT NULL,
    STATUS CHAR(1),
    X_C NUMBER (10),
    Y_C NUMBER (*,10) NOT NULL,
    Z_C NUMBER (*,10),
    FOO VARCHAR2 (18) NOT NULL,
    BAR DATE,
    BAZ TIMESTAMP
)
LOGGING NOCOMPRESS NOCACHE NOPARALLEL MONITORING
/

So, if you rely on a table description as the basis for creating a duplicate copy of a table, you not only have to do specific work to convert the description from description syntax to DDL, the DDL you end up with will likely be incomplete. While I am sure that there is an excellent reason for the syntactic split between the two types of table descriptions, I honestly cannot think of it. My current best theory is that SQL*Plus and SQL*Net cannot handle non-table returns from a database request. Because of this, the table description has to itself be a table. You could even make the argument that this is the 'right' way to do things, since it gives you a table description in a form (a table) that database code should easily be able to manipulate. However, the description is itself incomplete, so I'm not sure how useful that explanation is.
I'm not a database guru, but it seems like another way to handle this possible limitation is to have the table description query return a one-row, one-column table with a BLOB or VARCHAR2 containing the DDL description. SQL*Plus could then special-case the display of this query to make it look nice on the screen. (SQL*Plus already does special-case desc queries, since their display does not honor calls to SET PAGESIZE.) If you really do need table information in tabular form, there are always the ALL_TABLES and ALL_TAB_COLS views. (Of course, a really wonderful solution to all of this would be to make those views writable, somehow standardize them, and then skip the DDL entirely. :-)
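To make the conversion work concrete, here's a minimal sketch of a description-to-DDL converter. This is my own illustration, not an Oracle tool: it assumes the simple column layout shown above, and it necessarily omits the table-level attributes (LOGGING, NOCACHE, and so on) because the description never reports them.

```python
# Sketch: rebuild CREATE TABLE DDL from SQL*Plus 'desc'-style column
# lines. Illustration only; a real tool would need to handle more
# data types and recover table-level attributes from other views.

def desc_to_ddl(table_name, desc_lines):
    cols = []
    for line in desc_lines:
        line = line.strip()
        if not line:
            continue
        if "NOT NULL" in line:
            # desc order is NAME / NOT NULL / TYPE;
            # DDL order is NAME / TYPE / NOT NULL.
            name, rest = line.split(None, 1)
            col_type = rest.replace("NOT NULL", "").strip()
            cols.append(f"    {name} {col_type} NOT NULL")
        else:
            name, col_type = line.split(None, 1)
            cols.append(f"    {name} {col_type.strip()}")
    return f"CREATE TABLE {table_name} (\n" + ",\n".join(cols) + "\n)"

print(desc_to_ddl("COMMON.SAMPLE_TABLE",
                  ["NAME      NOT NULL VARCHAR2(64)",
                   "STATUS             CHAR(1)"]))
```

Even this toy version has to know that desc and DDL disagree about where NOT NULL goes, and the DDL it emits is still incomplete in exactly the way described above.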
To save folks like me from themselves, Oracle has a password recovery service on their website. This service works like most other similar services: enter your e-mail address and it then resets your password and sends the new one to your e-mail account. What their service does not do is trim the leading and trailing spaces off of the entered e-mail address. It just tries to look up whatever you entered in the password database. If you happen to have a trailing space on your address, it will not recognize it and will not do the password reset. Since the website uses a proportional font, and spaces are quite narrow, it's easy to miss the error, and you will likely be left wondering why Oracle forgot your account. This is perfectly in keeping with Oracle's seeming effort to keep their site as obtuse as possible in comparison with Microsoft's developer site. Seriously guys, you sell a platform: its value is directly proportional to the number of developers coding for it and the amount of code written to it. Make it as easy as possible, and it will only help your bottom line. Microsoft understands this; why don't you?
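The fix costs about one line of code. A sketch of the normalization step their reset form is missing (the function names here are mine, purely for illustration):

```python
# Sketch of the missing step: trim (and case-fold) the submitted
# address before the lookup, so an invisible trailing space can't
# make the account lookup silently fail.

def normalize_email(raw):
    return raw.strip().lower()

def find_account(accounts_by_email, submitted):
    # accounts_by_email: a lookup keyed by normalized address
    return accounts_by_email.get(normalize_email(submitted))

# A trailing space no longer matters:
print(find_account({"user@example.com": "account-123"}, "user@example.com "))
```

The same normalization should be applied when the address is first stored, so both sides of the comparison agree.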
The work I had done through RAC was pretty simple and easy to specify: extract a specific table from each of 1,200 PDF files and translate it to CSV format. The difficulty is that these were scanned PDF files and not generated with embedded text data, which made it a very manual process. Image quality was low enough that OCR software was highly unreliable.
After dividing the work into 11 phases, I submitted a bid request and got around 20 bids within a couple of days. The bids all ranged from 50 to 100% of my maximum amount and, with one exception, were entirely from coders in Pakistan, India, or Romania. The exception, a guy in Colorado, backed out as soon as he saw that the PDF files were scanned rather than textual. If you are an American paying for an American lifestyle, and are considering offering your services through RAC, you ought to consider with whom you're competing. You need to be very specialized or willing to work for low wage rates to be competitive. From the point of view of a buyer on RAC, the coder could be located on Mars and it wouldn't really matter all that much: this is true globalization at work.
Anyway, the rest of my RAC story is that there is no story. Over the course of about a month, I entered and completed RAC contracts for all 11 phases of work plus a re-entry phase for verification. Basically all of the contracts were on time, on budget, and of high quality. RAC did its job and there were no problems. Admittedly, my bids were for limited, easily specified work, but I'd do it again in a heartbeat.
For over a decade, there's always been some mystical missing piece of technology that was holding Linux back from mainstream acceptance. For a while personal finance software filled this role, later on it was a unified desktop, and later still, when KDE and Gnome reached stability, the need for good office software took up the baton. Fast forward to 2006 and the "one missing thing Linux needs to become mainstream" is apparently a good equivalent to Microsoft Outlook.
This line of reasoning is seductive to programmers: it basically transforms the question of "What does Linux need to become mainstream?" into the question "What code do I need to write?". After all, if the only thing holding Linux back from mainstream acceptance is a piece of code, then a missing piece of code is easy for a programmer to fix. In fact, since 1997 (the last time I ran Linux full time), basically all of the 'missing pieces' I mentioned above have been rewritten or created anew. If an integrated graphical desktop with a functional office suite was really the key to mainstream acceptance, then Linux should already be there. 10 years ago that was the belief and 10 years later that belief was basically proven completely wrong. When was the last time you saw Linux running on anything other than a server or in some other relatively fixed-function application? Waving a magic wand and integrating Evolution with Exchange won't change this any more than any of the other scapegoats that have taken the blame for Linux's niche status in the past.
It'd be easy to blame this on changing times: after all, who knew that Exchange integration would be so key to Linux's mainstream acceptance a few years ago? Actually, that'd be anybody in IT with a pulse. As Todd points out, Outlook 97 ("It's nearly a decade old, for crying out loud.") runs under Wine and provides Exchange integration. Put another way, the Linux desktop software stack hasn't interoperated natively with the mainstream PC software vendor's e-mail solution for over nine years. This isn't a short-term problem: this is innate. Another example of this kind of long-running problem is the ongoing trouble finding modern video drivers for X11. As much as people complain about ATI's lousy video drivers, it's only a repeat of the same thing that happened with Neomagic and Diamond back in the mid-90's. The names are different, but the problem and result are exactly the same: when buying hardware, caveat emptor if you want modern graphics support.
In both of these cases, it's easy to assign blame to closed proprietary vendors. You could also argue that it's just a symptom of OSS developers wanting to work on whatever's 'coolest', rather than what needs to get done. However, with either explanation the problem is inherent to open source, and the net effect is the same: Linux gets most of the way to where it needs to be, but it gets there late and fails to go the last mile or two. It's this last mile that's so important to mainstream acceptance, and getting through it is going to take a lot more than one or two pieces of code. I have no doubt that five years from now, Linux will have decent Exchange integration and excellent support for the ATI R300 graphics in my (then) six year old laptop. Too bad the battle for the desktop will have moved on.
"Like I said in January, the book brought back memories of the first few years of my career. Obviously the details are different, but if my kids ever ask about the beginning of my career I'll point them to your book first. You did an excellent job capturing the spirit of commercial software development."
I re-skimmed the book last night (setting aside a brand-new copy of C.J. Date's Database in Depth, ironically enough), and it still seems relevant, six months after the initial read. That's always a good sign.
Why even bother trying? A Microsoft PM specified this description feature. A programmer coded it. A tester tested it. A writer wrote the text. Another writer documented the feature. Translators translated it into 60 different languages. All so that Windows could show a description that is exactly as useless as the name of the service.
The smallest disk Dell sells on their cheapest laptop is 40GB. Is it really too much to ask for the description to include text closer to this?
For bonus points, the service description would also be available from the fault notification dialog box, possibly with hyperlinks to the appropriate developer information on MSDN (which could also be stored locally). The whole key here: trust the user of your product and enable them to use it efficiently.
Seriously, even if Linux were the $300 closed-source monopolistic OS and Windows XP was the free, open-source alternative, I'd be seriously considering switching away from Windows.
Oh, and of course Acrobat reader had to get the last word in. After the restart induced by the last dialog box, I got this:
The use case, of course, is for portable devices that can charge their batteries over a USB connection. Apple has stopped bundling anything except a USB cable with the iPod, and it's pretty easy to find USB power cords for PDAs and cell phones. With this kind of functionality in a laptop, the laptop could become a traveler's one central power adapter, eliminating the need to carry large numbers of dedicated adapters or half-baked partial solutions like the iGo. I can see paying 50-60 dollars extra for a laptop with this feature.
Another way this could be implemented would be to put 3-4 power-only USB ports on the laptop's power brick.
On the surface, this is a true statement. Since XML's introduction it's shown up virtually everywhere: XML has been used for everything from configuration files to RPC protocols. Better still, all these XML documents have the same understandable syntax and can be parsed by the same standard tools (which exist virtually everywhere). If you want to work with XML files, chances are your favorite text editor has built-in XML support; if you want something more structured, Excel has a powerful XML import capability, as do most databases. However, as nice as all this is, it glosses over one fundamental fact: XML, by itself, doesn't mean anything.
Saying that a document is in XML is basically the same as saying it's in CSV: it implies the format that contains the data, but it doesn't imply anything about the data itself. There are still a bunch of unresolved questions: What tags are supported? How are attributes parsed? What do the tags actually mean? These are the types of questions you'll find yourself asking about five seconds after receiving an XML document in a new format. While some of these questions can be answered by a Schema or DTD, the last question, the key question, isn't addressed by a schema at all. As anybody who has had to reverse engineer an otherwise unknown XML document can tell you: even if you know a document is in XML, the things you really care about are still left unspecified.
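A contrived sketch of the point: both documents below are well-formed XML and parse identically with a stock parser, but nothing in the syntax says what the number means. That knowledge lives entirely outside the document.

```python
# Two well-formed XML documents with identical syntax but unstated
# semantics. The parser accepts both; deciding whether <temp> is
# Celsius or Fahrenheit requires out-of-band knowledge (a schema,
# documentation, or reverse engineering).
import xml.etree.ElementTree as ET

doc_a = "<reading><temp>21</temp></reading>"   # Celsius?
doc_b = "<reading><temp>70</temp></reading>"   # Fahrenheit?

for doc in (doc_a, doc_b):
    root = ET.fromstring(doc)                  # parsing succeeds either way
    value = float(root.find("temp").text)      # ...but what unit is this?
    print(value)
```

A schema could at least constrain the tags and types; what the values mean still has to be agreed on by the people exchanging the documents.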
So, XML isn't really the 'language' Mr. Maraia states it to be, which is why his comment is so optimistic. While it is true that XML is useful and universal, you'll also need to learn schemas for the documents you'll be working with; that's where the bulk of the work will be. (Syntax is generally an easy thing to learn, and developing syntax processing code is a well understood branch of computer science.) So, while you should learn XML, if learning XML itself is the kind of decision you have to mull over, you probably aren't prepared for the steps you'll be taking immediately after learning XML. (This is particularly true if you're using XML to configure build tools, as in Maraia's book. If you're in that role and are having trouble with XML, you should just quit now.)
Erik Naggum, of comp.lang.lisp fame, summed this up quite nicely:
If you haven't seen it before, this dialog box basically means that Windows has downloaded a system update that needs to restart the system to install. Once it appears you have three choices:
- Do nothing - The system will wait for five minutes, filling in the progress bar, and then forcibly restart your computer.
- Click 'Restart Now' - Your computer will restart now.
- Click 'Restart Later' - The dialog box is dismissed and will reappear in 5-10 minutes.
To see why this is true, consider what the dialog box does not have:
- There is no way to see a list of what updates are being installed.
- There is no way to control how long the system restart is deferred.
- There is no way to launch the page of the system control panel that controls automatic update.