Mike Schaeffer's Blog

Articles with tag: programming
February 12, 2009

In the recent debate surrounding the SOLID Principles of Object Oriented Design, the following two quotes stood out.

Last week I was listening to a podcast on Hanselminutes, with Robert Martin talking about the SOLID principles. ... And, when I was listening to them, they all sounded to me like extremely bureaucratic programming that came from the mind of somebody that has not written a lot of code, frankly.

Joel Spolsky

"Reading The Ferengi Programmer by Jeff Atwood really made me quite concerned. Here.s clearly an opinion which to me seems not grounded in sustained experience..."

Dhananjay Nene

Both of these are speculative slights on someone else's experience level, either generally or with a particular bit of technology. Bad rhetorical technique aside, my guess is that these are rooted in a fundamental lack of trust that the other side might actually have a well thought out reason for their point of view. This is an easy trap to fall into, particularly in a field as subjective as software design. Take the 'editor wars' as an example: which is better, Emacs, vi, or a full featured IDE? I don't know the answer to this question, but I do know that I can find people that will tell me I'm wrong for preferring Emacs. Change the debate to something a bit more relevant, something like the design of a large piece of software, and people get even more vitriolic.

At least part of the solution to this problem is plain, old trust. Think about a good developer that's moved into a lead role: it's easy to see how they might care enough about a particular design point to impose that on their team, either by implementing it themselves or by dictate. Where the trust comes in is in avoiding that trap. If I impose a choice on my team, I limit their ability to explore the design space themselves, take their own risks, and then potentially fail. I also limit their ability to correct my own misconceptions... if I think I'm right enough to mandate a design, I also probably think I'm right enough to ignore you and do my own thing anyway. Ironically enough, this makes the combination of conviction and risk aversion its own risk, and potentially a big one without a counterbalance. (On a personal level, if you go around imposing your will and/or ignoring points of view, you also lose the opportunity to learn from those around you.)

And this is where the bit about rhetorical technique comes into play. As satisfying as it can be to say that somebody you disagree with "...has not written a lot of code, frankly.", it's really beside the point. It doesn't matter, even were it true. What matters more to reasonable discussions about engineering technique are specific and testable statements: something like "Interface Segregation will help keep defect rates down by promoting better unit tests." You may or may not agree with this statement, but it's more likely to lead to a relevant conversation than slights on experience or dogmatic declarations of opinion as fact. Several years ago I was told in no uncertain terms that I had made a design choice that 'wasn't scalable'. I ran some tests and came back with some numbers that showed my choice satisfied our requirements. Who do you think won that debate, the buzzword or the numbers? Specifics and testability can count for a lot. Dogma, not so much.

To be fair, most of the two blog posts I mention above are focused on meatier material than my quotes imply. I particularly liked Atwood's conclusion that "Rules, guidelines, and principles are gems of distilled experience that should be studied and respected. But they're never a substute [sic] for thinking critically about your work." For experienced developers, I expect that Nene would also agree. After all, he writes that "you have the experience on your side to generally make the right judgement calls and you are likely to anyway apply them under most of the cases." In a sense, both are arguing the same thing, namely that judgement ultimately drives the design process over strict adherence to a set of rules. The difference is that Nene takes it a few steps further and draws the conclusion that good developers produce good code, good code is SOLID, and Atwood's blog post is either useless or harmful. Maybe there are some valid points here, but they're obscured by a dogmatism that is more of a distraction than a productive way to think about software design.

February 22, 2008

The instructions I gave earlier on Renaming SVN Users work only when the SVN repository is hosted on a machine that can run SVN hooks written in Unix-style shell script. On a conventional Windows machine, one without Cygwin, MSYS, or similar, you have to switch to writing hooks in something like the Windows batch language.

If all you want to do is temporarily rename users, then you can just create an empty file named pre-revprop-change.cmd in your repository under hooks\. The default return code from a batch file is success, which SVN interprets as allowing all revision property changes, all the time, by anybody. If you want to implement an actual policy, Philibert Perusse has posted a template script online.
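
If you do want an actual policy, a batch hook in the same spirit as the Unix-style sample is not much harder. The sketch below is only an illustration, not Perusse's template: SVN invokes pre-revprop-change with the repository path, revision, user, property name, and action as its arguments, and the hook approves a change by exiting with status 0.

@echo off
rem Hypothetical pre-revprop-change.cmd sketch: permit edits ("M") to the
rem log message and the author, reject any other revision property change.
set PROPNAME=%4
set ACTION=%5

if "%ACTION%"=="M" if "%PROPNAME%"=="svn:log" exit 0
if "%ACTION%"=="M" if "%PROPNAME%"=="svn:author" exit 0

echo Changing revision property %PROPNAME% is prohibited 1>&2
exit 1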

February 11, 2008

I've been keeping track of the vCalc source code in an SVN repository since May of 2005. While I'm the only person who has ever committed code into the repository, I've developed vCalc on three or four machines, with different usernames on each machine. Since SVN records usernames with each commit, these historical usernames show up in each svn log or svn blame. svn blame is particularly bad because it displays a code listing with the username prepended to each line in a fixed width gutter. With some usernames longer than others, usernames that are very long can exceed the width of the gutter and push the code over to the right. Fortunately, changing historical usernames isn't that hard, if you have administrator rights on your SVN repository.

SVN stores the name of a revision's committer in a revision property named svn:author. If you're not familiar with the term, a revision property is a blob of out of band data that SVN attaches to the revision. In addition to the author of a commit, they're also used to store the commit log message, and, via SVN's propset and propget commands, user-provided custom metadata for the revision. Changing the name of a user associated with a commit basically amounts to using propset to update the svn:author property for a revision. The command to do this is structured like so:

svn propset svn:author --revprop -rrev-number new-username repository-URL
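
For example, with a hypothetical revision number, username, and repository URL filled in, an actual invocation might look like:

svn propset svn:author --revprop -r103 mikes file:///c:/svn-repos/vcalc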

If this works, you are all set, but what is more likely to happen is the following error:

svn: Repository has not been enabled to accept revision propchanges; ask the administrator to create a pre-revprop-change hook

By default, revision property changes are disabled. This makes sense if you are at all interested in using your source code control system to satisfy an auditing requirement. Changing the author of a commit would be a great way for a developer to cover their tracks, if they were interested in doing something underhanded. Also, unlike most other aspects of a project managed in SVN, revision properties have no change tracking: They are the change tracking mechanism for everything else. Because of the security risks, enabling changes to revision properties requires establishment of a guard hook: an external procedure that is consulted whenever someone requests that a revision property be changed. Any policy decisions about who can change what revision property when are implemented in the hook procedure.

Hooks in SVN are stored in the hooks/ directory under the repository toplevel. Conveniently, SVN provides a sample implementation of the hook we need in the shell script pre-revprop-change.tmpl, but the sample has strict defaults about what can be changed, allowing only the log message to be set:

if [ "$ACTION" = "M" -a "$PROPNAME" = "svn:log" ]; then exit 0; fi

echo "Changing revision properties other than svn:log is prohibited" > &2
exit 1

The sample script can be enabled by renaming it to pre-revprop-change. It can be made considerably more lax by adding an exit 0 before the lines I list above. At this point, the property update command should work, although if you're at all interested in the security of your repository, it is best to restore whatever revision property policy was in place as soon as possible.
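
For reference, here is a sketch of a slightly more permissive check, reusing the $ACTION and $PROPNAME variables the sample script already sets up: it permits edits to both the log message and the author, which is all the renaming procedure above needs, and leaves everything else prohibited.

# Hypothetical variation on the sample hook, not the stock template.
if [ "$ACTION" = "M" -a "$PROPNAME" = "svn:log" ]; then exit 0; fi
if [ "$ACTION" = "M" -a "$PROPNAME" = "svn:author" ]; then exit 0; fi

echo "Changing revision properties other than svn:log and svn:author is prohibited" >&2
exit 1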

January 21, 2008

Another one along the lines of my last post. I tried to compile this source file today, using the compiler in my little Lisp:

(define (values . args) (%panic "roh roh"))

(define (test x) (+ x 1))

I got the following result:

d:\test>vcsh -c test.scm
;;;; VCSH, Debug Build (SCAN 0.99 - Dec 17 2007 16:47:30)

; Info: Loading Internal File: fasl-compiler
; Info: Package 'fasl-compiler' created
; Info: Loading Internal File: fasl-write
; Info: Package 'fasl-write' created
; Info: Loading Internal File: fasl-compiler-run
; Info: Package 'fasl-compiler-run' created
; Info: stack limit disabled!
Fatal Error: roh roh @ (error.cpp:168)

Needless to say, fatal errors still aren't any good. However, this one is a bit more interesting than a simple type checking problem. The function %panic is the internal function used to signal fatal errors from Lisp code. The first definition above redefines values, the function used to return multiple values, so that it always panics with a fatal error. This is the kind of thing that, if done in a running environment, would break things almost immediately.

But the compiler is slightly different... it isolates the program being compiled from the compiler itself. This is done to keep redefinitions that might break the currently running compiler from doing just that. Redefinitions by the compiled program are only supposed to be visible to the compiled program. Since the above program never itself invokes values, it should never hit the call to %panic... except that it does.

What's happening here lies in the processing of the second definition. The definition itself is transformed a couple of times by macroexpansion, first to this:

(%define test (named-lambda test (x) (+ x 1)))

And then, basically, to this:

(%define test (%lambda ((name . test) (lambda-list x)) (x) (+ x 1)))

The second macroexpansion step is the step that looks for optional arguments, and the internal function that parses lambda lists for optional arguments returns three values using values. This invocation of values happens in the environment of the program being compiled, so it hits the new %panic-invoking definition and the whole show grinds to a halt. The 'easy' fix, ensuring that macro expansion is isolated from potentially harmful redefinitions, won't work. Macro expansion has to happen in the user environment, so that macros can see function definitions that they might rely upon.
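
To make the failure mode concrete, here is a small sketch of the pattern involved; the names are hypothetical and not the actual vcsh internals. A lambda-list parser hands back several results at once via values, and its caller picks them apart with call-with-values. If values has been redefined to panic in the environment where the parser runs, the parser dies before the caller ever sees a result.

;; Hypothetical sketch of the multiple-value pattern -- not the real compiler code.
(define (parse-lambda-list ll)
  ;; returns three results: required args, optional args, rest arg
  (values ll '() #f))

(call-with-values
  (lambda () (parse-lambda-list '(x)))
  (lambda (required optional rest)
    (list required optional rest)))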

I don't have a unit test for the user/compiler separation logic, so when I started this blog post I thought I was going to say something like: 'look, something else fundamentally broken, and without a test case'. That's interesting, but if you need convincing to write unit tests, you're probably already lost. What I actually learned while researching this post is a bit more subtle: it's a fundamental problem, but it's more about the design than the code itself. While the design I have for user/compiler separation seems to work most of the time, it's not adequate to solve this kind of problem. I'm not yet exactly sure what the solution is, but it won't necessarily involve a missing unit test.

Older Articles...