Mike Schaeffer's Blog

Articles with tag: tech
March 5, 2022

Over most of the ten years I've been using git, I've been a strong proponent of merging over rebasing. It seemed more honest to avoid rewriting commits and more likely to produce a complete history. There are also problems that arise when you rewrite shared history, and you can avoid those entirely if you just never rewrite history at all. While all of this is true, the hidden costs of the approach came to play an increasing role in my thinking, and these days, I essentially avoid merge entirely. The result has been an easier workflow, with a more useful history of more coherent commits.

History tracking in a tool like git serves a few development purposes, some tactical and some strategic. Tactically speaking, it's nice to have confidence that you can always reset to a particular state of the codebase, no matter how badly you've screwed it up. It's easier to make "risky" changes to code when you know that you're a split second away from your last known good state. Further, git remotes give you easy access to a form of off-site backup, and tags give you the ability to label releases. Not only does the history in a tool like git make it easier to get to your last known good state during development, it also makes it easier to get back to the version you released last month before your dog destroyed your laptop.
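
As a small example of how cheap that tactical safety net is, labeling a release with a tag and getting back to it later is a one-liner each (the version number is just a placeholder):

git tag -a v1.4.0 -m "Release 1.4.0"
git checkout v1.4.0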

At a strategic level, history tracking offers other, longer-term benefits. With a little effort, it's an excellent way to document how and why your code evolves over time. Correctly done (and with an IDE), a good version history gives developers immediate access to the origin of each line of code, along with an explanation of how and why it got there. Of course, it takes effort to get there. Your history can easily devolve into a bunch of "WIP" messages in a randomly associated stream of commits. Like everything else in life worth doing, it takes effort to ensure that you actually have a commit history that can live up to its strategic value.

This starts with a commit history that people bother to read, and like everything else, it takes effort to produce something worth reading. For people to bother reading your commit history, they need to believe that it's worth the time spent to do so. For that to happen, enough effort needs to have been spent assembling the history that it's possible to understand what's being said. This is where the notion of a complete history runs into trouble. Just like historians curate facts into readable narratives, it is our responsibility as developers to take some time to curate our projects' change history. At least if we expect it to be read. My argument for rebasing over merging boils down to the fact that rebase/squash makes it easier to do this curation and produce a history that has these useful properties.

For a commit to be useful in the future as a point of documentation, it needs to contain a coherent unit of work. git thinks in terms of commits, so it's important that you also think in terms of commits. Being able to trust that a single commit contains a complete single change is useful both from the point of view of interpreting a history, and also from the point of view of using git to manipulate the history. It's easier to cherry-pick one commit with a useful change than it is three commits, each with a part of that one change.

Another way of putting this is that nobody cares about the history of how you developed a given feature. Imagine adding a field to a screen. You make a back end change in one commit, a front end change in the next, and then submit them both in one branch as a PR. A year later, does it really matter to you or to anybody else that you modified the back end first and then the front end? The two commits are just noise in the history. They document a state that never existed in anything like a production environment.

These two commits also introduce a certain degree of ongoing risk. Maybe you're trying to backport the added field into an earlier maintenance release of your software. What happens if you cherry-pick just one of the two commits into the maintenance release? Most likely, that results in a wholly invalid state that you may or may not detect in testing. Sure, the two commits honestly documented the history, but there's a cost. You lose documentation of the fact that both the front and back end changes are necessary parts of a single whole.

Given this argument for squashing, or curating, commits into useful atomic units, development branches largely reduce to single commits. You may have a sequence of commits during development to personally track your work, but by the time you merge, you've squashed it down to one atomic commit describing one useful change. This simplifies your history directly, but it also makes it easier to rebase your development branch. Rebasing a branch with a single commit avoids introducing historical states that "never existed". The single commit also dramatically simplifies the process of merge conflict resolution. Rebase a branch with 10 commits, and you may have 10 sets of merge conflicts to resolve. Do you really care about the first nine? Will you really go back to those commits and verify that they still work post-rebase? If you don't, you're just dumping garbage in your commit log that might not even compile, much less run.
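
To make that concrete, here's a sketch of the kind of workflow this implies, assuming a feature branch named add-field and a trunk branch named main (both names are just placeholders):

git checkout add-field
git rebase -i main
git checkout main
git merge --ff-only add-field

The interactive rebase is where the curation happens - the development commits get squashed into a single commit with a carefully written message - and the fast-forward-only merge guarantees that no merge commit sneaks into the history.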

I'll close with the thought that this approach also lends itself to better commit messages. If there are fewer commits, there are fewer commit messages to write. With fewer commit messages to write, you can take more time on each to write something useful. It's also easier to write commit messages when your commits are self-contained atomic units. Squashing and curating commits is useful by itself in that it leads to a cleaner history, but it also leads to more opportunities to produce good and useful commit messages. It points in the direction of a virtuous cycle where positive changes drive other positive changes.

Tags: git, tech
January 14, 2022

This image has been circulating on LinkedIn as a tongue-in-cheek example of a minimum viable product.

Of course, at least one of the responses was that it's not an MVP without some extras. It needs 24/7 monitoring or a video camera with a motion alarm. It needs to detect quakes that occur off hours or when you're otherwise away from the detector. The trouble with this statement is the same as with the initial claimed MVP status of this design - both claims make assumptions about requirements. The initial claim assumes you're okay missing quakes when you're not around and the second assumes you really do need to know. To identify an MVP, you need to understand what it means to be viable. You need to understand the goals and requirements of your stakeholders and user community.

Personally, I'm sympathetic to the initial claim that two googly eyes stuck on a sheet of construction paper might actually be a viable earthquake detector. As a Texan transplant to the Northeast, I'd never experienced an earthquake until the 2011 Virginia earthquake rattled the walls of my suburban Philly office. Not having any real idea what was going on, my co-workers and I walked over to a wall of windows to figure it out. Nothing bad happened, but it wasn't a smart move, and it was exactly the sort of thing a wall mounted earthquake detector might have helped avoid. The product doesn't do much, but it does do something, and that might well be enough that it's viable.

This viability, though, is contingent on the fact that there was no need to know about earthquakes that occurred off-hours. Add that requirement in, and more capability is needed. The power of the MVP is that it forces you to develop a better understanding of what it is that you're trying to accomplish. Getting to an MVP is less about the product and more about the requirements that drive the creation of that product.

In a field like technology, where practitioners are often attracted to the technology itself, the distinction between what is truly required and what is not can be easy to miss. Personally, I came into this field because I like building things. It's fun and rewarding to turn an idea into a working system. The trouble with the MVP from this point of view is that defining a truly minimum product may easily eliminate the need to build something cool. The answer may well be that no, you don't get to build the video detection system, because you don't need it and your time is better spent elsewhere. The notion of the MVP inherently pulls you away from the act of building and forces you to consider that there may be no immediate value in the thing you aim to build.

One of my first consulting engagements was years ago, for a bank building out a power trading system. They wanted to enter the business to hedge other trades, and the lack of a trading system to enforce controls and limits was the reason they couldn't. Contrary to the advice of my team's leadership, they initially decided to scratch build a trading system in Java. There were two parts of this experience that spoke to the idea of understanding requirements and the scope of the minimum viable product.

The first case can be boiled down to the phrase 'training issue'. Coming from a background of packaged software development, my instincts at the time were completely aligned around building software that helps avoid user error. In mass market software, you can't train all of your users, so the software has to fill the gap. There's a higher standard for viability in that the software is required to do more and be more reliable.

This trading platform was different in that it was in-house software with a known user base that numbered in the dozens. With a user base that small and that well known, it's feasible to just train everybody to avoid bugs. A crashing, high severity bug that might block a mass market software release might just be addressed by training users to avoid it. This can be much faster, which is important when the software schedule is blocking the business from operating in the first place. The software fix might not actually be required for the product to be viable. This was less about perfect software, and more about getting to minimum viability and getting out of the way of a business that needed to run.

The second part of the story is that most of the way through the first phase of the build, the client dropped the custom build entirely. Instead, they'd deploy a commercial trading platform with some light customizations. There was a lot less to build, and it went live much more quickly, particularly in the more complex subsequent phases of the work. It turned out that none of the detailed customizations enabled by the custom build were actually required.

Note that this is not fundamentally a negative message. What the MVP lets you do is lower the cost of your build by focusing on what is truly required. In the case of a trading organization, it can get your traders doing their job more quickly. In the case of an earthquake detector, maybe it means you can afford more than just one. Lowering the cost of your product can enable it to be used sooner and in more ways than otherwise.

The concept of an MVP has power because it focuses your attention on the actual requirements you're trying to meet. With that clearer focus, you can achieve lower costs by reducing your scope. This in turn implies you can afford to do more of real value with the limited resources you have available. It's not as much about doing less, as it is about doing more of value with the resources you have at hand. That's a powerful thing, and something to keep in mind as you decide what you really must build.

Tags: tech
August 12, 2020

Like a lot of engineers, I have a handful of personal projects I keep around for various reasons. Some are useful and some are just for fun, but none of them get the same sort of investment as a funded commercial effort. The consequence of this is that it's all the more important to keep things as simple as possible, to focus the investment where it counts. Part of the way I achieve that is that I've spent some initial time putting together a standard packaging approach. I know, I know - "standard packaging approach" doesn't sound like "fun personal project" - but getting the packaging out of the way makes it easier to focus on the actual fun part - building out functionality. It's for that reason that I've also successfully used variants of this approach on smaller commercial projects too. Hopefully, this will be useful to you too.

Setting the stage, the top-level view is this:

  • Uberjar packaging of single binaries using Leiningen and a few plugins.
  • Standard scripts and tools for packaging and install.
  • Use of existing Linux mechanisms for service control.
  • A heavy tendency toward 12 Factor principles.

What this gets you is a good local interactive development story and easy deployment to a server. I've also gotten it to work with client-side code, using Figwheel.

What it doesn't get you is direct support for large numbers of processes or servers. Modern hardware is fast and capable, so you may not have those requirements, but if you do, you'll need something heavier weight, to reduce both management overhead and costs. (In my day job, we've done some amazing things with Kubernetes.)

The example project I'm going to use is the engine for this blog, Rhinowiki. It's useful, but simple enough to be used as a model for a packaging approach. If you're also interested in strategies for managing apps with read/write persistence (SQL) and rich client code, I have a couple other programs packaged this way with those features. Even with these, the essentials of the packaging strategy are exactly the same as what I describe here.

Everything begins with a traditional project.clj, and the project can be started locally with the usual lein run.

Once running, main immediately writes a herald log message at info level:

(defn -main [& args]
  (log/info "Starting Rhinowiki" (get-version))
  (let [config (load-config)]
    (log/debug "config" config)
    (webserver/start (:http-port config)
                     (blog/blog-routes (blog/blog-init config)))
    (log/info "end run.")))

This immediately lets you know the process has started, logs are working, and which version of the code is running. These are usually the first things verified after an install, so it's good to ensure they happen early on. This is particularly useful for software that's not interactive or running on slow hardware. (I've run some of this code on Raspberry Pi hardware that takes ten or so seconds to get to the startup herald.)

The way the version is acquired is interesting too. The call to get-version is really a macro invocation and not a function call.

(defmacro get-version []
  ;; Capture compile-time property definition from Lein
  (System/getProperty "rhinowiki.version"))

Because macros are evaluated at compile time, the macroexpansion of get-version has access to JVM properties defined at build time by Leiningen.

The next step is to pull in configuration settings using Anatoly Polinsky's https://github.com/tolitius/cprop library. cprop can do more than what I use it for here, but here, I use it to load a single EDN config file. cprop lets the name of that file be identified at startup via a system property, making it possible to define a file for each server, as well as a local config file specified in project.clj:

:jvm-opts ["-Dconf=local-config.edn"]
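
For reference, a minimal version of the config-loading code might look something like this - the namespace is made up for illustration, and the real code is in the Rhinowiki repository:

(ns example.config
  (:require [cprop.core :as cprop]))

(defn load-config []
  ;; cprop reads the EDN file named by the conf system property
  ;; (-Dconf=local-config.edn above) and merges in its other sources.
  (cprop/load-config))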

I've also found it useful to minimize the number and power of configuration settings. Every setting that changes is a risk that something will break when you promote code. Every setting that doesn't change is a risk of introducing a bug in the settings code.

I also dump the configuration to a log very early in the startup process.

(log/debug "config" config)

Given the importance of configuration settings, it's occasionally important to be able to inspect the settings in use at run-time. However, this log is written at debug level, so it doesn't normally print. This reduces the risk of accidentally revealing secret keys in the log stream. Depending on the importance of those keys, there is also much more you can do to protect them, if preventing the risk is worth the effort.

After all that's done, main transfers control over to the actual application:

(webserver/start (:http-port config)
                 (blog/blog-routes (blog/blog-init config)))

With a configurable application running, the next step is to get it packaged in a way that lets us predictably install it elsewhere. The strategy here is a two-step approach: build the code as an uberjar and include the uberjar in a self-contained .tar.gz as an installation package.

  • The installer package contains everything needed to install the software (the one exception being the JVM itself).
  • The package name includes the version number of the software: rhinowiki-0.3.3.tar.gz.
  • Files in the installation package all have a prefix (rhinowiki-install, in this case) to confine the installation files to a single directory when installing. This is to make it easy to avoid crosstalk between multiple installers and delete installation directories after you're done with an installation.
  • There is an idempotent installation script (install.sh) at the root of the package. Running this script either creates or updates an installation.
  • The software is installed as a Linux service.

The net result of this packaging is an installation/upgrade process that works like this:

tar xzvf rhinowiki-0.3.3.tar.gz
cd rhinowiki-install
sudo service rhinowiki stop
sudo ./install.sh
sudo service rhinowiki start

To get to this point, I use the Leiningen release task and the lein-tar plugin, both originally by Phil Hagelberg. There's a wrapper script, but the essential command is lein clean && lein release $RELEASE_LEVEL. This instructs Leiningen to execute a series of tasks listed in the release-tasks key in project.clj.

I've had to modify Leiningen's default list of release tasks in two ways: I skip signing of tagged releases in git, and I invoke lein-tar rather than deploy. However, the full task list needs to be completely restated in project.clj (https://github.com/mschaef/rhinowiki/blob/master/project.clj#L42), so it's a lengthy setting.

:release-tasks [["vcs" "assert-committed"]
                ["change" "version" "leiningen.release/bump-version" "release"]
                ["vcs" "commit"]
                ["vcs" "tag" "--no-sign" ]
                ["tar"]
                ["change" "version" "leiningen.release/bump-version"]
                ["vcs" "commit"]
                ["vcs" "push"]]

The configuration for lein-tar is more straightforward - include the plugin, and specify a few options. The options request that the packaged output be written in the project root, include an uberjar, and extract into an install directory rather than just CWD.

:plugins [[lein-ring "0.9.7"]
          [lein-tar "3.3.0"]]

;; ...

:tar {:uberjar true
      :format :tar-gz
      :output-dir "."
      :leading-path "rhinowiki-install"}

Give the uberjar a specific fixed name:

:uberjar-name "rhinowiki-standalone.jar"

And populate it with a few files in addition to the uberjar itself - lein-tar picks these files up from pkg/ at the root of the project directory hierarchy. These files include everything else needed to install the application - a configuration map for cprop, an install script, a service script, and log configuration.

The install script is the last part of the process. It's an idempotent script that, when run on a server as sudo, guarantees that the application is installed. It sets up users and groups, copies files from the package to wherever they belong, and uses update-rc.d to ensure that the service scripts are correctly installed.
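
The real script ships inside the package, but to give a flavor of what 'idempotent' means here, a stripped-down sketch might look something like this (the file names and paths are illustrative rather than the actual Rhinowiki layout):

id -u rhinowiki >/dev/null 2>&1 || useradd -r rhinowiki
mkdir -p /usr/share/rhinowiki
cp rhinowiki-standalone.jar config.edn /usr/share/rhinowiki
cp rhinowiki.init /etc/init.d/rhinowiki
update-rc.d rhinowiki defaults

Every step either creates something that's missing or overwrites it with the packaged version, so running the script a second time leaves the server in the same state as running it once.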

This breaks down the packaging and installation process to the following:

  • ./package.sh
  • scp package tarball to server and ssh in
  • Extract the package - tar xzvf rhinowiki-0.3.3.tar.gz
  • Change into the expanded package directory - cd rhinowiki-install
  • Stop any existing instances of the service - sudo service rhinowiki stop
  • Run the install script - sudo ./install.sh
  • (Re)Start the service - sudo service rhinowiki start

At this point, I've sketched out the approach end to end, and I hope it's evident that this can be used in fairly simple scenarios. Before I close, let me also talk about a few sharp edges to be aware of. Like every other engineering approach, this packaging strategy has tradeoffs, and some of these tradeoffs require specific compromises.

The first is that this approach requires dependencies (notably the JVM) to be manually installed on target servers. For smaller environments, this can be acceptable; for larger numbers of target VMs, almost definitely not.

The second is that there's nothing about persistence in this approach. It either needs to be managed externally, or the entire persistence story needs to be internal to the deployed uberjar. This is why I wrote sql-file, which provides a built-in SQL database with schema migration support. Another approach is just to handle it altogether externally, which is what I do for Rhinowiki. The Rhinowiki store is a git repository, and it's managed out of band with respect to the deployment of Rhinowiki itself.

But these are both specific problems that can be managed for smaller applications. Often, it's worth the costs associated with these problems to gain the benefits of reducing the number of software components and moving pieces. If you're in a situation like that, I hope you give this approach a try and find it useful. Please let me know if you do.

September 18, 2019

Amazingly enough, git is now 14 years old. What started out as Linus Torvalds' 'three day' replacement for BitKeeper is now dominant enough in its domain that even the Windows Kernel is hosted on git. (If you really are amazed by the age of git, that last bit might be even more amazing.) In any event, I also use git and have done so for close to ten years. Along with a compiler and an editor, I'd consider it one of the three essential development tools. That experience has left me with a set of preconceived notions about how git should be used and some tips and tricks on how to use it better. I've been meaning to get it all into a single place for a while, and this is the attempt.

This isn't really the place to start learning git (that would be a tutorial). This is for people that have used git for a while, understand the basic mechanics, and want to look for ways to elevate their game and streamline their workflow.

The Underlying Data Model

git is built on a distinct data structure, and the implications of this structure permeate the user experience.

Understanding the underlying data model is important, and not that complicated from a computer science perspective.

  • Every revision of a source tree managed by git can be considered a complete snapshot of every source file. This is called a commit.
  • Every commit has a name (or address), which is a hash of the entire contents of the commit. These names are not user friendly (They look like d674bf514fc5e8301740534efa42a28ca4466afd), but they're essentially guaranteed to be unique.
  • If two commits have different contents, they also have different hashes. A hash is enough to completely identify a state of a source tree.
  • Because hashes are a pain to work with, git also has refs. Refs are user friendly symbolic names (master, fix-bug-branch) that can each point to a commit by hash.
  • Commits can't be mutated, because any change to their contents would change their name/hash. Refs are where git allows mutations to occur.
  • If you think of a ref as a variable that contains a hash and points to a commit, you're not far off.
  • Commits can themselves refer to other commits - each commit can contain references to zero or more predecessors. These backlinks are what allow git to construct a history of commits (and therefore a history of a source code tree).
  • The 'first commit' has zero predecessors, a merge commit has two or more.

The result of all this is that the core data structure is a directed acyclic graph, covered nicely in this post by Tommi Virtanen.
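
If you want to see this structure for yourself, git will happily show it to you. None of this is specific to any particular project - the output just reflects whatever repository you run it in:

git rev-parse master
git cat-file -p HEAD
git log --oneline --graph

The first prints the hash a ref like master currently points to, the second dumps a commit object - its tree, its parent hashes, and its message - and the third draws the commit graph itself.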

Tags: git, tech
January 24, 2019

Despite several good online resources, it's not necessarily obvious how friend's wrap-authorize interacts with Compojure routing.

This set of routes handles /4 incorrectly:

(defroutes app-routes
  (GET "/1" [] (site-page 1))
  (GET "/2" [] (site-page 2))
  (friend/wrap-authorize (GET "/3" [] (site-page 3)) #{::user})
  (GET "/4" [] (site-page 4)))

Any attempt to route to /4 for a user that doesn't have the ::user role will fail with the same error you would expect to (and do) get from an unauthorized attempt to route to /3. The reason this happens is that Compojure considers the four routes in the sequence in which they are listed and wrap-authorize works by throw-ing out if there is an authorization error (and aborting the routing entirely).

So, even though the code looks like the authorization check is associated with /3, it's really associated with the point in evaluation after /2 is considered, but before /3 or /4. So for an unauthorized user of /3, Compojure never considers either the /3 or /4 routes. /4 (and anything that might follow it) is hidden behind the same security as /3.

This is what's meant when the documentation says to do the authorization check after the routing and not before. Let the route decide if the authorization check gets run and then your other routes won't be impacted by authorization checks that don't apply.

What that looks like in code is this (with the friend/authorize check inside the body of the route):

(defroutes app-routes
  (GET "/1" [] (site-page 1))
  (GET "/2" [] (site-page 2))
  (GET "/3" [] (friend/authorize #{::user} (site-page 3)))
  (GET "/4" [] (site-page 4)))

The documentation does mention the use of context to help solve this problem. Where that plays a role is when a set of routes need to be hidden behind the same authorization check. But the essential point is to check and enforce authorization only after you know you need to do it.
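
For completeness, here's a sketch of what that looks like, following the pattern in the friend documentation (the /members path and the member-routes handler are made up for the example):

(defroutes app-routes
  (GET "/1" [] (site-page 1))
  (GET "/2" [] (site-page 2))
  (context "/members" request
    (friend/wrap-authorize member-routes #{::user})))

Routing only enters the /members context when the request URI actually matches it, so the authorization check never gets a chance to throw for unrelated routes.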

December 21, 2018

I've lately run across several interesting small computer history sites. If you have any interest in small computing's emergence from 1980 to 1990 or so, these are worth a look.

In no particular order:

  • OS/2 Museum - Covers OS/2, but also gets into detail around PC architecture. Among other interesting bits, this is just one of several articles on A20 gate handling, and here's something on the IBM 8514/A.
  • DTACK Grounded - A newsletter written to promote Hal Hardenbergh's side business of attached Motorola 68000 processor boards. Mostly interesting for his commentary on then-current events leading up to the emergence and use of 32-bit microprocessors. Notably, this was written at the time of Intel's pivot from the iAPX 432 to the 80386. The commentary on the relative unreliability of DRAM is amusing too.
  • CRPG Addict - Not sure how he has the time, but the author of this blog has set himself the challenge of playing through and documenting every early CRPG game from the late 70's and well into the 90's.
  • The Digital Antiquarian - Critical commentary on early small computer gaming. Lots of details about how games came to be made and their content.
  • Retrocomputing Stack Exchange site - This is currently more like Netflix than anything else. Coverage is spotty, but that doesn't mean you can't find something interesting to read.
August 3, 2018

It's been a long time coming, but I've finally replaced blosxom with a custom CMS I've been writing called Rhinowiki. More than a serious attempt at a CMS, this is mainly a fun little side project to write some Clojure, experiment a bit with JGit, and hopefully make it easier to implement a few of my longer term plans that might have been tricky to do in straight Perl.

Full source in the link above, a high level summary here:

  • Everything is in Clojure.
  • Backend format is Markdown as interpreted by markdown-clj.
  • Source code is highlighted using highlight.js.
  • Markdown rendering is done entirely on the server, with syntax highlighting on the client. (I'm looking into Nashorn to run highlight.js server side too, but don't know if that's possible within my time constraints.)
  • Back end storage is managed using git and retrieved via JGit.
  • All requests are served out of memory.
  • There's a hand rolled (and conformant) Atom feed.
  • Also RSS 2.0.
April 29, 2015

Since my last post, I dropped by an Apple Store to take a look at the 2015 MacBook. It is difficult to overstate how startlingly small the new machine is in person. I may be biased by the internal specifications, but the impression is much more 'big tablet' than 'small laptop'. The other standout feature was the touchpad. It continues Apple's tradition of high-quality touchpad implementations, removes the mechanical switch and hinge, and adds force sensitivity and haptic feedback. The mechanical simplifications alone are a worthwhile improvement.

I also spent some time typing on the keyboard. It's as shallow as you'd think, but the keys are very direct and have a positive feel. There's none of the subtle rattling found on most small keyboards, and it registered every keypress. I'm not completely convinced yet, but it at least seems possible that this type of keyboard could become the preferred keyboard for some typists.

The performance of the machine is also a point of interest. Even the lightly loaded demo machine on the showroom floor had a few hiccups paging the display around from one virtual desktop to the next. Maybe it's nothing, but it does make me wonder if the machine can keep up with daily use, particularly after a few OSX updates have been released. (For me, I think it'd be fine, but I spend most of my time in Terminal, Emacs, and Safari, none of which are exactly heavy-hitters.)

Tags: tech
April 13, 2015

A few more-or-less random observations about our industry, and various recent product releases:

  • Google Nexus 6 - Nice phone, but they lost one of the biggest competitive differentiators of the older 4 and 5 models: the price, and they did it with a more-niche phablet-sized device. Seems like a big mistake.
  • 2015 MacBook - Assuming they didn't screw up the keyboard too badly, Apple has something special here. Powerful enough to do what most people need, a Retina display, and highly portable. The single port seems like genius, in that it eliminates the need/desire for devices like the Henge Dock. It'll also be interesting to see how many of the new MacBook design elements make it to the Pro and Air.
  • Apple Watch - Another display to serve as a distraction, another battery to keep charged, and another device to maintain and upgrade? To me, the marginal benefit of carrying and maintaining the watch does not sufficiently offset the marginal costs. If it's more about style than function, then Apple has to make a compelling argument that the Edition watch is really worth 50 times the basic watch, despite the fact it's stylistically so similar.

Tags: tech
March 26, 2014

Update 2019-01-17: KSM recently redesigned their website in a way that removes the original blog. Because of this, I've taken some of what I wrote then for KSM and re-hosted it here. Thanks are due both to KSM Technology Partners for allowing me to do this and to the Wayback Machine for retaining the content. All the links below are updated to reflect the articles' new locations.


Sorry for the radio silence, but recently I've been focusing my writing time on the KSM Technology Partners Blog. My writing there is still technical in nature, but it tends to be more heavily focused on the JVM. If you're interested, here are a few of what I consider to be the highlights.

In mid-2013, I started out writing about how to use Runnable to explicitly enforce dynamic extent in Java. In a nutshell, this is a way to implement try...with...resources in versions of Java that don't have it built in to the language. I then used the dynamic extent technique to build a ThreadLocal that plays nicely with thread pools. This is useful because thread locals require an understanding of which thread you're running on, which thread pooling techniques can abstract away.

Later in the year, I focused more on Clojure, starting off with a quick bit on the relationship of lexical closures to Java inner classes. I also wrote about a particular kind of stack overflow exception that can happen with lazy sequences. Lazy sequences can nicely remove the need to use recursion while traversing their length, but each time two unrealized lazy sequences are combined, it adds to the recursive depth required to compute the first element. For me, this stack overflow was a difficult error to diagnose, because it seemed so counter-intuitive.

I'm also in the middle of a series of posts that relate the GoF command pattern to functional programming. The posts start off with Java, but will ultimately describe a Clojure implementation that compiles a stack based expression language into optimized Java bytecode. If you'd like to play with the code, it's on github.

Older Articles...