Mike Schaeffer's Blog

Articles with tag: clojure
August 12, 2020

Like a lot of engineers, I have a handful of personal projects I keep around for various reasons. Some are useful and some are just for fun, but none of them get the same sort of investment as a funded commercial effort. The consequence of this is that it's all the more important to keep things as simple as possible, to focus the investment where it counts. Part of the way I achieve that is that I've spent some initial time putting together a standard packaging approach. I know, I know - "standard packaging approach" doesn't sound like "fun personal project" - but getting the packaging out of the way makes it easier to focus on the actual fun part - building out functionality. It's for that reason that I've also successfully used variants of this approach on smaller commercial projects too. Hopefully, this will be useful to you too.

Setting the stage, the top-level view is this:

  • Uberjar packaging of single binaries using Leiningen and a few plugins.
  • Standard scripts and tools for packaging and install.
  • Use of existing Linux mechanisms for service control.
  • A heavy tendency toward 12 Factor principles.

What this gets you is a good local interactive development story and easy deployment to a server. I've also gotten it to work with client-side code too, using Figwheel.

What it doesn't get you is direct support for large numbers of processes or servers. Modern hardware is fast and capable, so you may not have those requirements, but if you do, you'll need something heavier-weight to reduce both management overhead and costs. (In my day job, we've done some amazing things with Kubernetes.)

The example project I'm going to use is the engine for this blog, Rhinowiki. It's useful, but simple enough to be used as a model for a packaging approach. If you're also interested in strategies for managing apps with read/write persistence (SQL) and rich client code, I have a couple of other programs packaged this way with those features. Even with these, the essentials of the packaging strategy are exactly the same as what I describe here.

Everything begins with a traditional project.clj, and the project can be started locally with the usual lein run.

Once running, main immediately writes a herald log message at info level:

(defn -main [& args]
  (log/info "Starting Rhinowiki" (get-version))
  (let [config (load-config)]
    (log/debug "config" config)
    (webserver/start (:http-port config)
                     (blog/blog-routes (blog/blog-init config)))
    (log/info "end run.")))

This immediately lets you know that the process has started and logging is working, and it tells you which version of the code is running. These are usually the first things verified after an install, so it's good to ensure they happen early on. This is particularly useful for software that's not interactive or is running on slow hardware. (I've run some of this code on Raspberry Pi hardware that takes ten or so seconds to get to the startup herald.)

The way the version is acquired is interesting too. The call to get-version is really a macro invocation and not a function call.

(defmacro get-version []
  ;; Capture compile-time property definition from Lein
  (System/getProperty "rhinowiki.version"))

Because macros are evaluated at compile time, the macroexpansion of get-version has access to JVM properties defined at build time by Leiningen. The version string is therefore baked into the compiled code, so it's still available when the uberjar later runs outside of Leiningen.
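Since the expansion happens when Leiningen compiles the project, the version winds up in the compiled code as a string literal. At a lein repl, the effect looks roughly like this (version number illustrative):

(macroexpand '(get-version))
;; => "0.3.3"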

The next step is to pull in configuration settings using Anatoly Polinsky's https://github.com/tolitius/cprop library. cprop can do more than what I use it for here, but here, I use it to load a single EDN config file. cprop lets the name of that file be identified at startup via a system property, making it possible to define a file for each server, as well as a local config file for development, specified in project.clj:

:jvm-opts ["-Dconf=local-config.edn"]
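For reference, a minimal load-config built on cprop might look something like this (a sketch - the namespace name is an assumption on my part, and cprop also merges in other configuration sources):

(ns rhinowiki.config
  (:require [cprop.core :as cprop]))

;; cprop honors the -Dconf=... system property shown above, reading
;; the named EDN file and merging it with its other config sources.
(defn load-config []
  (cprop/load-config))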

I've also found it useful to minimize the number and power of configuration settings. Every setting that changes is a risk that something will break when you promote code. Every setting that doesn't change is a risk of introducing a bug in the settings code.

I also dump the configuration to a log very early in the startup process.

(log/debug "config" config)

Given the importance of configuration settings, it's occasionally necessary to inspect the settings in use at run-time. However, this log is written at debug level, so it doesn't normally print. This reduces the risk of accidentally revealing secret keys in the log stream. Depending on the importance of those keys, there is also much more you can do to protect them, if preventing the risk is worth the effort.

After all that's done, main transfers control over to the actual application:

(webserver/start (:http-port config)
                 (blog/blog-routes (blog/blog-init config)))

With a configurable application running, the next step is to get it packaged in a way that lets us predictably install it elsewhere. The strategy here is a two-step approach: build the code as an uberjar and include the uberjar in a self-contained .tar.gz as an installation package.

  • The installer package contains everything needed to install the software (the one exception being the JVM itself).
  • The package name includes the version number of the software: rhinowiki-0.3.3.tar.gz.
  • Files in the installation package all have a prefix (rhinowiki-install, in this case) to confine the installation files to a single directory when installing. This makes it easy to avoid crosstalk between multiple installers and to delete installation directories after you're done with an installation.
  • There is an idempotent installation script (install.sh) at the root of the package. Running this script either creates or updates an installation.
  • The software is installed as a Linux service.

The net result of this packaging is an installation/upgrade process that works like this:

tar xzvf rhinowiki-0.3.3.tar.gz
cd rhinowiki-install
sudo service rhinowiki stop
sudo ./install.sh
sudo service rhinowiki start

To get to this point, I use the Leiningen release task and the lein-tar plugin, both originally by Phil Hagelberg. There's a wrapper script, but the essential command is lein clean && lein release $RELEASE_LEVEL. This instructs Leiningen to execute a series of tasks listed in the release-tasks key in project.clj.

I've had to modify Leiningen's default list of release tasks in two ways: I skip signing of tagged releases in git, and I invoke lein-tar rather than deploy. However, the full task list needs to be completely restated in project.clj (https://github.com/mschaef/rhinowiki/blob/master/project.clj#L42), so it's a lengthy setting.

:release-tasks [["vcs" "assert-committed"]
                ["change" "version" "leiningen.release/bump-version" "release"]
                ["vcs" "commit"]
                ["vcs" "tag" "--no-sign" ]
                ["tar"]
                ["change" "version" "leiningen.release/bump-version"]
                ["vcs" "commit"]
                ["vcs" "push"]]

The configuration for lein-tar is more straightforward - include the plugin, and specify a few options. The options request that the packaged output be written in the project root, include an uberjar, and extract into an install directory rather than just CWD.

:plugins [[lein-ring "0.9.7"]
          [lein-tar "3.3.0"]]

;; ...

:tar {:uberjar true
      :format :tar-gz
      :output-dir "."
      :leading-path "rhinowiki-install"}

Give the uberjar a specific fixed name:

:uberjar-name "rhinowiki-standalone.jar"

And populate it with a few files in addition to the uberjar itself - lein-tar accepts these files in pkg/ at the root of the project directory hierarchy. These files include everything else needed to install the application - a configuration map for cprop, an install script, a service script, and log configuration.

The install script is the last part of the process. It's an idempotent script that, when run on a server under sudo, guarantees that the application is installed. It sets up users and groups, copies files from the package to wherever they belong, and uses update-rc.d to ensure that the service scripts are correctly installed.

This brings the full packaging and installation process down to the following:

  • ./package.sh
  • scp package tarball to server and ssh in
  • Extract the package - tar xzvf rhinowiki-0.3.3.tar.gz
  • Change into the expanded package directory - cd rhinowiki-install
  • Stop any existing instances of the service - sudo service rhinowiki stop
  • Run the install script - sudo ./install.sh
  • (Re)Start the service - sudo service rhinowiki start

At this point, I've sketched out the approach end to end, and I hope it's evident that this can be used in fairly simple scenarios. Before I close, let me also talk about a few sharp edges to be aware of. Like every other engineering approach, this packaging strategy has tradeoffs, and some of these tradeoffs require specific compromises.

The first is that this approach requires dependencies (notably the JVM) to be manually installed on target servers. For smaller environments, this can be acceptable; for larger numbers of target VMs, almost definitely not.

The second is that there's nothing about persistence in this approach. It either needs to be managed externally, or the entire persistence story needs to be internal to the deployed uberjar. This is why I wrote sql-file, which provides a built-in SQL database with schema migration support. Another approach is to handle it altogether externally, which is what I do for Rhinowiki. The Rhinowiki store is a git repository, and it's managed out of band with respect to the deployment of Rhinowiki itself.

But these are both specific problems that can be managed for smaller applications. Often, it's worth the costs associated with these problems to gain the benefits of reducing the number of software components and moving pieces. If you're in a situation like that, I hope you give this approach a try and find it useful. Please let me know if you do.

January 24, 2019

Despite several good online resources, it's not necessarily obvious how friend's wrap-authorize interacts with Compojure routing.

This set of routes handles /4 incorrectly:

(defroutes app-routes
  (GET "/1" [] (site-page 1))
  (GET "/2" [] (site-page 2))
  (friend/wrap-authorize (GET "/3" [] (site-page 3)) #{::user})
  (GET "/4" [] (site-page 4)))

Any attempt to route to /4 for a user that doesn't have the ::user role will fail with the same error you would expect to (and do) get from an unauthorized attempt to route to /3. The reason this happens is that Compojure considers the four routes in the sequence in which they are listed and wrap-authorize works by throw-ing out if there is an authorization error (and aborting the routing entirely).

So, even though the code looks like the authorization check is associated with /3, it's really associated with the point in evaluation after /2 is considered, but before /3 or /4. So for an unauthorized user of /3, Compojure never considers either the /3 or /4 routes. /4 (and anything that might follow it) is hidden behind the same security as /3.

This is what's meant when the documentation says to do the authorization check after the routing and not before. Let the route decide if the authorization check gets run and then your other routes won't be impacted by authorization checks that don't apply.

What that looks like in code is this (with the friend/authorize check inside the body of the route):

(defroutes app-routes
  (GET "/1" [] (site-page 1))
  (GET "/2" [] (site-page 2))
  (GET "/3" [] (friend/authorize #{::user} (site-page 3)))
  (GET "/4" [] (site-page 4)))

The documentation does mention the use of context to help solve this problem. Where that plays a role is when a set of routes need to be hidden behind the same authorization check. But the essential point is to check and enforce authorization only after you know you need to do it.
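As a sketch, grouping routes under a context with a single check might look like this (paths and roles illustrative):

(defroutes user-routes
  (GET "/3" [] (site-page 3))
  (GET "/4" [] (site-page 4)))

(defroutes app-routes
  (GET "/1" [] (site-page 1))
  (GET "/2" [] (site-page 2))
  (context "/user" []
    (friend/wrap-authorize user-routes #{::user})))

Here, /user/3 and /user/4 deliberately share one authorization check, and because that check only runs once routing descends into the /user context, /1 and /2 are unaffected.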

August 3, 2018

It's been a long time coming, but I've finally replaced blosxom with a custom CMS I've been writing called Rhinowiki. More than a serious attempt at a CMS, this is mainly a fun little side project to write some Clojure, experiment a bit with JGit, and hopefully make it easier to implement a few of my longer term plans that might have been tricky to do in straight Perl.

Full source in the link above, a high level summary here:

  • Everything is in Clojure.
  • Backend format is Markdown as interpreted by markdown-clj.
  • Source code is highlighted using highlight.js.
  • Markdown rendering is done entirely on the server, with syntax highlighting on the client. (I'm looking into Nashorn to run highlight.js server side too, but don't know if that's possible within my time constraints.)
  • Back end storage is managed using, and retrieved via, JGit.
  • All requests are served out of memory.
  • There's a hand rolled (and conformant) Atom feed.
  • Also RSS 2.0.
December 20, 2014

One of the key attributes I look for when writing and reviewing code is that code should express the intent of the developer more than the mechanism used to achieve that intent. In other words, code should read as much as possible as if it were a description of the end goal to be achieved. The mechanism used to achieve that goal is secondary.

Over the years, I’ve found this emphasis improves the quality of a system by making it easier to write correct code. By removing the distraction of the mechanism underneath the code, it’s easier for the author of that code to stay in the mindset of the business process they’re implementing. To see what I mean, consider how hard it would be to query a SQL database if every query was forced to specify the details of each table scan, index lookup, sort, join, and filter. The power of SQL is that it eliminates the mechanism of the query from consideration and lets a developer focus on the logic itself. The computer handles the details. Compilers do the same sort of thing for high level languages: coding in Java means not worrying about register allocation, machine instruction ordering, or the details of free memory reclamation. In the short-term, these abstractions make it easier to think about the problem I’m being paid to solve. Over a longer time scale, the increased distance between the intent and mechanism makes it easier to improve the performance or reliability of a system. Adding an index can transparently change a SQL query plan, and Java seamlessly made the jump from an interpreter to a compiler.

One of the unique sources of power in the Lisp family of languages is a combination of features that makes it easier to build the abstractions necessary to elevate code from mechanism to intent. The combination of dynamic typing, higher order functions, good data structures, and macros can make it possible to develop abstractions that allow developers to focus more on what matters, the intent of the paying customer, and less on what doesn’t. In this article, I’ll talk about what that looks like for the calculator example and how Clojure brings the tools needed to focus on the intent of the code.

To level set, I’m going to go back to the calculator’s addition command defined in the last installment of this series:

(fn [ { [x y & more] :stack } ]
   { :stack (cons (+ y x) more)})

Given a stack, this command removes the top two arguments from the stack, adds them, and pushes the result back on top of the stack. This stack:

1 2 5 7

becomes this stack:

3 5 7

While the Clojure addition command is shorter than the Java version, the Clojure version still includes a number of assumptions about the machinery used in the underlying implementation:

  • Calculator state is passed to the command as a map with a key :stack that holds the stack.
  • The input stack can be destructured as a sequence.
  • The output state is represented in a map allocated at the end of the command’s execution.
  • The output stack is a sequence of cons cells and the output of this command is stored in a newly allocated cell.
  • The command has a single point in time at which it begins execution.
  • The command has a single point in time at which it ends execution.
  • The execution of this command cannot overlap with other commands that manipulate the stack.

Truth be told, there isn’t a single item on this list that’s essential to the semantics of our addition command. Particularly in the case where a sequence of commands is linked together to make a composite command, every item on that list might be incorrect. This is because the state of the stack between elements of a composite command might not ever be directly visible to the user. Keeping that in mind, what would be nice is some kind of shorthand notation for stack operations that hides these implementation details. This type of notation would make it possible to express the intent of a command without the machinery. Fortunately, the programming language Forth has a stack effect notation often used in comments that might do the trick.

Forth is an interesting and unique language with a heavy dependency on stack manipulation. One of the coding conventions sometimes used in Forth development is that every ‘composite command’ (‘word’, in Forth terminology) is associated with a comment that shows a picture of the stack at the beginning and end of the command’s execution. For addition, such a comment might look like this:

: add ( x y -- x+y ) .... ;

This comment shows that the command takes two arguments off the top of the stack, ‘x’ and ‘y’, and returns a single value ‘x+y’. None of the details regarding how the stack is implemented are included in the comment. The only thing that’s left in the comment are the semantics of the operation. This is more or less perfect for defining a calculator command. Mapped into Clojure code, it might look something like this:

(stack-op [x y] [(+ x y)])

This Clojure form indicates a stack operation and has stack pictures that show the top of the stack both before and after the evaluation of the command. The notation is short, yes, but it’s particularly useful because it doesn’t overspecify the command by including the details of the mechanics. All that’s left in this notation is the intent of the command.

Of course, the mechanics of the command still need to be there for the command to work. The magic of macros in Clojure is that they make it easier to bridge the gap from the notation you want to the mechanism you need. Fortunately, all it takes in this case is a short three-line macro that tells Clojure how to reconstitute a function definition from our new stack-op notation:

(defmacro stack-op [ before after ]
  `(fn [ { [ ~@before & more# ] :stack } ]
     { :stack (concat ~after more# ) } ) )

Squint your eyes, and the structure of the original Clojure add command function should be visible within the macro definition. That’s because this macro really serves as a kind of IDE snippet hosted by the compiler, providing blanks to be filled in with the macro parameters. Multiple calls to a macro are like expanding the same snippet multiple times with different parameters. The only difference is that when you expand a snippet within an IDE, it only helps you when you’re entering the code into the editor; the relationship between a block of code in the editor and the snippet from which it came is immediately lost. Macros preserve that relationship, and thanks to Lisp’s syntax, do so in a way that avoids some of the worst issues that plague C macros. This gives us both the more ‘intentional’ notation, as well as the ability to later change the underlying implementation in more profound ways.
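To see the payoff, here's a sketch of a few more commands written in the new notation (the command set is illustrative; recall that the leftmost name in the before-picture binds the top of the stack):

(def commands
  {"+"    (stack-op [x y] [(+ x y)])
   "-"    (stack-op [x y] [(- y x)]) ; y is the deeper element
   "dup"  (stack-op [x] [x x])
   "swap" (stack-op [x y] [y x])
   "drop" (stack-op [x] [])})

Each entry reads as a stack picture, and the macro supplies all of the destructuring and state-map machinery behind the scenes.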

Before I close the post, I should mention that there are ways to approach this type of design in other languages. In C, the preprocessor provides access to compile-time macro expansion, and for Java and C#, code generation techniques are well accepted. For JavaScript, any of the higher level languages that compile into JavaScript can be viewed as particularly heavy-weight forms of this technique. Where Lisp and Clojure shine is that they make it easy by building it into the language directly. This post only scratches the surface, but the next post will continue the theme by exploring how we can improve the calculator now that we have a syntax that more accurately expresses our intent.

December 15, 2014

In my last post, I started porting the RPN calculator example from Java to Clojure, moving a functional program into a functional language. In this post, I finish the work and show how the Clojure calculator models both state and calculator commands.

Going back to the last post, the Clojure version of the Read-Eval-Print-Loop (REPL) has the following code.

(defn main []
  (loop [ state (make-initial-state) ]
    (let [command (parse-command-string (read-command-string state))]
      (if-let [new-state (apply-command state command)]
        (recur new-state)
        nil))))

As with the Java REPL, this function continually loops, gathering commands to evaluate, evaluating them against the current state, and printing the state after each command is executed. The REPL function controls the lifecycle of the calculator state from beginning to end, starting by invoking the state constructor function:

(defn make-initial-state []
  {
   :stack ()
   :regs (vec (take 20 (repeat 0)))
   })

Like main, the empty brackets signify that this is a 0-arity function, a function that takes 0 arguments. Looking back at the call site, this is why the name of the function appears by itself within the parenthesis:

(make-initial-state)

If the function required arguments, they’d be to the right of the function name at the call site:

(make-initial-state <arg-0> ... <arg-n>)

This is the way that Lisp-like languages represent function and macro call sites. Every function or macro call is syntactically a list delimited by parentheses. The first element of that list identifies the function or macro being invoked, and the arguments to that function or macro are in the second list position and beyond. This is the rule, and it is essentially universal, even including the syntax used to define functions. In this form, defn is the name of the function definition macro, and it takes the function name, argument list, and body as arguments:

(defn make-initial-state []
  {
   :stack ()
   :regs (vec (take 20 (repeat 0)))
   })

For this function, the body of the function is a single statement, a literal for a two element hash map. In Clojure, whenever control flow passes through an object literal at run time, a new instance of that literal is constructed and populated.

{
 :stack ()
 :regs (vec (take 20 (repeat 0)))
 }
 

This one statement is thus the rough equivalent of calling a constructor and then a series of calls to populate the new object. Paraphrasing into faux-Java:

Mapping m = new Mapping();
 
m.put("stack", Sequence.EMPTY);
m.put("regs", vec(take(20, repeat(0)));

Once the state object is constructed, the first thing the REPL has to do is prompt the user for a command. The function to read a new command takes a state as an argument. This is so it can print out the state prior to prompting the user and reading the command string:

(defn read-command-string [ state ]
  (show-state state)
  (print "> ")
  (flush)
  (.readLine *in*))

This code should be fairly understandable, but the last line is worthy of an explicit comment. *in* is bound to a reader wrapping the usual java.lang.System.in, and the leading dot is Clojure syntax for invoking a method on that object. That last line is almost exactly equivalent to this Java code:

System.in.readLine();

There’s more use of Clojure/Java interoperability in the command parser:

(defn parse-command-string [ str ]
  (make-composite-command
   (map parse-single-command (.split (.trim str) "\\s+"))))

The Java-interop part is in this bit here:

(.split (.trim str) "\\s+")

Translating into Java:

str.trim().split("\\s+")

Because str is a java.lang.String, all the usual string methods are available. This makes it easy to use standard Java facilities to trim the leading and trailing white space from a string and then split it into space-delimited tokens. Going back to part 2 of this series, this is the original algorithm I used to handle multiple calculator commands entered at the same prompt.

The rest of parse-command-string also follows the original part-2 design: each token is parsed individually as a command, and the list of all commands is then assembled into a single composite command. The difference is that there’s less notation in the Clojure version, mainly due to the use of the higher-order function map. map applies a function to each element of an input sequence and returns a new sequence containing the results. This one function encapsulates a loop, two variable declarations, a constructor call, and the method call needed to populate the output sequence:

List<Command> subCmds = new LinkedList<Command>();
  
for (String subCmdStr : cmdStr.split("\\s+"))
    subCmds.add(parseSingleCommand(subCmdStr));

What’s nice about this is that eliminating the code eliminates the possibility of making certain kinds of errors. It also makes the code more about the intent of the logic, and less about the mechanism used to achieve that intent. This opens up optimization opportunities like Clojure’s lazy evaluation of mapping functions.
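make-composite-command itself can be written directly in terms of commands-as-functions. Here's a sketch of what it might look like (the body is my paraphrase, and it ignores the partial-state wrinkle discussed below):

;; A composite command is itself just a function of state that
;; threads the state through each sub-command in turn. A nil state
;; (a request to exit) short-circuits the remaining commands.
(defn make-composite-command [commands]
  (fn [state]
    (reduce (fn [state command]
              (when state
                (command state)))
            state
            commands)))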

The final bit of new notation I’d like to point out is the way the Clojure version represents commands. Commands in the Clojure version of the calculator are functions on calculator state, represented as Clojure functions:

(fn [ { [x y & more] :stack } ]
    { :stack (cons (+ y x) more)})

This function, the addition command, accepts a state object and uses argument list destructuring to extract out the stack portion of the state. It then assembles a new state object containing a version of the stack in which the top two elements have been replaced by their sum. Rather than focusing on the machinery used to gather and manipulate stack arguments, Clojure’s notation makes it easier for the code behind the command to match the intent. As before, this helps reduce the chance for errors, and it also opens up new optimization opportunities.

(If you’ve read closely and are wondering what happened to regs, commands in the Clojure version of the calculator can actually return a partial state. If a command doesn’t return a state element, then the previous value for that state element is used in the next state. Because add doesn’t change regs, it doesn’t bother to return it.)
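A sketch of how that partial-state merging can work (apply-command is the name used in the REPL loop above; the body here is an assumption on my part):

;; Merge the partial state returned by the command over the previous
;; state; a nil return propagates out and ends the REPL loop.
(defn apply-command [state command]
  (when-let [new-state (command state)]
    (merge state new-state)))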

December 2, 2014

So far in this series, I’ve taken a basic calculator written in Java and transformed it from a command-oriented procedural design into a more functional style. In some ways, this has made for simpler code: calculator state is better encapsulated in value objects, and explicit control flow structures have been replaced with domain-specific higher order functions. Unfortunately, Java wasn’t designed to be a functional language, so the notation has become progressively more cumbersome and lengthy. 151 lines of idiomatic Java is now 327 lines of inner classes, custom iterators, and inverted control flow patterns. It should be difficult to get this kind of code through a serious Java code review.

Despite this difficulty, there is value in the functional design approach; what we need is a new notation. To show what I mean, this article switches gears and ports the latest version of the calculator from Java to Clojure. This reduces the size of the code from 327 lines down to a more reasonable-for-the-functionality 82. More importantly, the new notation opens up new opportunities for better expressiveness and further optimization. Building on the Clojure port, I’ll ultimately build out a version of the calculator that uses eval for legitimate purposes, compiling calculator macros so they can run almost as fast as code written directly in Java.

The first step to understanding the Clojure port is to understand how it’s built from source. For the Java versions of the code, I used Apache Maven to automate the build process. Maven provides standard access to dependencies, a standard project directory structure, and a standard set of verbs for building, installing, and running the project. In the Clojure world, the equivalent tool is called Leiningen. It provides the same structure and services for a Clojure project as Maven does for a Java project, including the ability to pull in Maven dependencies. While it’s possible to build Clojure code with Maven, Leiningen is a better choice for new work, largely because it’s better integrated into the Clojure ecosystem out of the box.

For the RPN calculator project, the project definition file looks like this:

(defproject rpn-calc "0.1.0-SNAPSHOT"
  :description "KSM Partners - RPN Calculator"
 
  :license {:name "Eclipse Public License"
            :url "http://www.eclipse.org/legal/epl-v10.html"}
 
  :dependencies [[org.clojure/clojure "1.5.0"]]
 
  :repl-options {
                 :host "0.0.0.0"
                 :port 53095
                 }
 
  :main rpn-calc.main)

This file contains the same sorts of information as the equivalent POM file for the Java version of the project. (In fact, Leiningen provides a way to take a Leiningen project definition file and translate it into an equivalent Maven pom.xml.) Rather than XML, the Leiningen project file is written as an S-expression, and it contains a few additional settings. Notably, the last line is the name of the project’s entry point: the function that gets called when Leiningen runs the project. For this project, rpn-calc.main is a function that ultimately delegates to one of three entry points for the three Clojure versions of the calculator. For this post, the implementation specific entry point looks like this:

(defn main []
  (loop [ state (make-initial-state) ]
    (let [command (parse-command-string (read-command-string state))]
      (if-let [new-state (apply-command state command)]
        (recur new-state)
        nil))))
For comparison, here is the equivalent entry point from the Java version:

public void main() throws Exception
{
    State state = new State();
 
    while(state != null) {
        System.out.println();
        showStack(state);
        System.out.print("> ");
 
        String cmdLine = System.console().readLine();
 
        if (cmdLine == null)
            break;
 
        Command cmd = parseCommandString(cmdLine);
 
        state = cmd.execute(state);
    }
}

Unwrapping the code, both function definitions include construction of the initial state and then the body of the Read-Eval-Print-Loop. This fragment of the Clojure version includes both elements:

(loop [ state (make-initial-state) ]
    ...
    (recur new-state))

The loop form, surrounded by parentheses, delimits the body of the loop. Any loop iteration variables are defined and initialized within the bracketed form at the beginning of the loop. In this case, a variable state is initialized to hold the value returned by a call to make-initial-state. Within the body of the loop, there can be one or more recur forms that jump back to the beginning of the loop and provide new values for all the iteration variables defined for the loop. This gives a bit more flexibility than Java’s while loop: there can be multiple jumps back to the beginning of a loop.
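As a contrived sketch of that flexibility, a single loop can be re-entered from more than one recur form:

(loop [n 0]
  (cond (< n 5)  (recur (inc n)) ; first way back to the top
        (odd? n) (recur (inc n)) ; second way back to the top
        :else    n))
;; => 6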

The body of this loop form is entirely composed of a let form. A let form establishes local variable bindings over a block of source code and provides initial values for those variables. If this sounds a lot like a loop form without the looping, that’s exactly what it is.

(let [command (parse-command-string (read-command-string state))]
   ...)

This code calls read-command-string, passing in the current state, and then passes the returned command string into a call to parse-command-string. The result of this two-step read process is the Clojure equivalent of a command object, which is modeled as a function from a calculator state to a state.

Digressing a moment, there are several attributes of the Clojure syntax that are worth pointing out. The most important is that, as with most Lisps, parentheses play a major role in the syntax of the language. Parentheses (and braces and brackets) delimit all statements and expressions, group statements into logical blocks, delimit function definitions, and serve as the syntax for composite object literals. In contrast, a language like Java uses a combination of semicolons, braces, and parsing grammar to serve the same purposes. This gives Clojure a more homogeneous syntax, but a syntax with fewer rules that’s easier to parse and analyze. Explicit statement delimiters also allow Lisp more freedom to pick symbol names. Symbols in Lisp can include characters (‘-‘, ‘<', '&', etc.) that infix languages can't use for the purpose, because the explicit statement grouping makes it easier to distinguish a symbol from its context. The topic of Lisp syntax is really interesting enough for its own lengthy series of posts and articles.

Going back to the Clojure calculator's main loop, the next statement in the loop is yet another binding form. Like loop, this binding form also includes an element of control flow.

(if-let [new-state (apply-command state command)]
   (recur new-state)
   nil)

It may be easiest to see the meaning of this block of code by paraphrasing it into Java:

State newState = applyCommand(state, command);
 
if (newState != null)
    return recur(newState);
else
    return null;

What if-let does is to establish a new local variable and then conditionally pick between two alternative control flow paths based on the value of the new variable. It’s a common pattern within code, so it’s good to have a specific syntax for the purpose. What’s interesting about Clojure, though, is that if the language didn’t have it built in, a programmer working in Clojure could add it with a macro and you couldn’t tell the difference from built-in features. (In fact, the default Clojure implementation of if-let is itself a macro.)
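To make that concrete, here's a sketch of how a simplified if-let could be defined in user code (the real clojure.core/if-let handles more cases, such as an optional else branch and full destructuring):

(defmacro my-if-let [[name test-expr] then-form else-form]
  `(let [temp# ~test-expr]
     (if temp#
       (let [~name temp#]
         ~then-form)
       ~else-form)))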

At this point, I’ve covered the basic structure of the Clojure project, as well as the project’s main entry point. Subsequent posts will cover modeling of application state within Clojure, as well as the command parser, and the commands themselves. Once I’ve covered the basic functionality of the calculator, I’ll use that as a starting point to discuss custom syntax for command definitions, and ultimately a compiler for the calculator.

March 26, 2014

Update 2019-01-17: KSM recently redesigned their website in a way that removes the original blog. Because of this, I've taken some of what I wrote then for KSM and re-hosted it here. Thanks are due both to KSM Technology Partners for allowing me to do this and to the Wayback Machine for retaining the content. All the links below are updated to reflect the articles' new locations.


Sorry for the radio silence, but recently I've been focusing my writing time on the KSM Technology Partners Blog. My writing there is still technical in nature, but it tends to be more heavily focused on the JVM. If you're interested, here are a few of what I consider to be the highlights.

In mid-2013, I started out writing about how to use Runnable to explicitly enforce dynamic extent in Java. In a nutshell, this is a way to implement try...with...resources in versions of Java that don't have it built into the language. I then used the dynamic extent technique to build a ThreadLocal that plays nicely with thread pools. This is useful because ThreadLocal requires an understanding of which thread you're running on, and that's exactly the detail that thread pooling techniques abstract away.

Later in the year, I focused more on Clojure, starting off with a quick bit on the relationship of lexical closures to Java inner classes. I also wrote about a particular kind of stack overflow exception that can happen with lazy sequences. Lazy sequences can nicely remove the need to use recursion while traversing their length, but each time two unrealized lazy sequences are combined, it adds to the recursive depth required to compute the first element. For me, this stack overflow was a difficult error to diagnose, because it seemed so counter-intuitive.

I'm also in the middle of a series of posts that relate the GoF command pattern to functional programming. The posts start off with Java, but will ultimately describe a Clojure implementation that compiles a stack based expression language into optimized Java bytecode. If you'd like to play with the code, it's on github.

January 28, 2014

I’ll get back to rpncalc shortly, but before I do, I wanted to take a post to talk about a surprising problem I recently had with lazy sequences. As part of my day job, I am developing a Clojure based system for accumulating and displaying time series data on a web page. One of the core algorithms in my implementation is an incremental merge sort. I have a function that takes two seqs, both ordered by time, and produces a lazy result seq with all values from both inputs, also in time order. Every few seconds, as new input values are read from their sources, the program uses the ordered merge function to integrate the new values into a seq that contains a complete history of all values. It’s a straightforward and flexible design, and initially, it appeared to work quite well. The problems only started to arise after several hours of run time: traversing the history list would then immediately result in stack overflow exceptions.

If you’re familiar with lazy sequences, this may seem like an odd result. After all, one of the benefits of lazy sequences (aside from their laziness) is that they can eliminate recursion and reduce pressure on the stack. Lazy sequences might require more heap allocation, but they shouldn’t require all that much stack. To explore this idea a bit further, I’m going to use a simpler example than my merge function: a recursive version of map:

(defn my-map [ fn xs ]
  (if (empty? xs)
     ()
     (cons (fn (first xs)) (my-map fn (rest xs)))))

For simple use cases, my-map has the same interface as the built-in function map:

user> (my-map #(+ 1 %) (range 3))
(1 2 3)
user> (map #(+ 1 %) (range 3))
(1 2 3)

The limitations of my-map start to become apparent for larger sequences:

user> (map #(+ 1 %) (range 10000))
(1 2 3 4 5 6 7 8 9 10 ...)
user> (my-map #(+ 1 %) (range 10000))
StackOverflowError   clojure.lang.Numbers.add (Numbers.java:1685)

What’s happening here is that the recursive call to my-map causes the function to allocate a stack frame for each element of the input sequence. The stack has a fixed size limit, so this places a fixed limit on the size of the sequence that my-map can manipulate. Any input sequences beyond that length limit will cause the function to overflow the stack. map gets around this through laziness, which is something that we can also use:

(defn my-map-lazy [ fn xs ]
  (lazy-seq
    (if-let [ xss (seq xs) ]
     (cons (fn (first xs)) (my-map-lazy fn (rest xs)))
     ())))

With this definition, all is right with the world:

user> (my-map-lazy #(+ 1 %) (range 10000))
(1 2 3 4 5 6 7 8 9 10 ...)

While the text of my-map and my-map-lazy is similar, the two functions are internally quite different in operation. my-map completely computes the result of the mapping before it returns: it eagerly evaluates and returns the fully calculated result. In contrast, my-map-lazy doesn’t compute any of the mapping before it returns: it lazily defers the calculation until later and returns a promise to compute the result later on. The difference may be clearer looking at a slightly macro-expanded form of my-map-lazy:

(defn my-map-lazy [ fn xs ]
  (new clojure.lang.LazySeq
     (fn* []
       (if-let [xss (seq xs)]
          (cons (fn (first xs)) (my-map-lazy fn (rest xs)))
         ()))))

The only computations that happen between the entry and exit of my-map-lazy are the allocation of a new lexical closure and then the instantiation of a new instance of LazySeq. While the body of my-map-lazy still contains a call to itself, the call doesn’t happen until after my-map-lazy returns and the LazySeq invokes the closure. There is no recursive call, and there is no risk of overflowing the stack. (The traversal state that was stored on the stack in the recursive version is stored on the heap in the lazy version.)

So why was my merge sort overflowing the stack? To see why, I’m going to introduce a new function built on Clojure’s own map. This function serves no purpose other than to introduce a layer of laziness. It is only useful for the purposes of this discussion:

(defn lazify [ xs ]
  (map identity xs))

Because the evaluation of map is lazy, we can predict that what lazify returns is a LazySeq. This turns out to be true:

user> (.getClass [1 2 3 4 5])
clojure.lang.PersistentVector
user> (.getClass (lazify [1 2 3 4 5]))
clojure.lang.LazySeq

Calling lazify on the result of lazify produces another LazySeq, distinct from the first.

user> (def a (lazify [1 2 3 4 5]))
#'user/a
user> (.getClass a)
clojure.lang.LazySeq
user> (def b (lazify a))
#'user/b
user> (.getClass b)
clojure.lang.LazySeq
user> (identical? a b)
false

Due to the way lazify is defined, sequences a and b produce identical results - they both evaluate to (1 2 3 4 5). However, despite the similarity in the results they produce, the two sequences are distinct and produce their results with different code paths. Sequence a computes the identity of each element of the vector [1 2 3 4 5], and sequence b computes the identity of each element of sequence a. Sequence b has to go through sequence a to get each value from the vector that underlies both. Even with lazy sequences, this realization process is still recursive, and it still consumes stack.

To confirm this theory, I’ll use another function that applies lazify to a sequence any number of times.

(defn lazify-n [ n seq ]
  (loop [n n seq seq]
    (if (> n 0)
      (recur (- n 1) (lazify seq))
      seq)))

This function builds a tower of lazy sequences n sequences tall. Computing even the first element of the result sequence involves recursively computing every element of each sequence in this tower, down to the original input seq to lazify-n. The stack depth required to maintain this recursion is proportional to n. High values of n should produce sequences that can’t be traversed without throwing a stack overflow error. This turns out to be true:

user> (lazify-n 1 [1 2 3 4 5])
(1 2 3 4 5)
user> (lazify-n 4000 [1 2 3 4 5])
StackOverflowError   clojure.core/seq (core.clj:133)

Going back to my original merge sort stack overflow, it is caused by the same issue that we see in lazify-n. The calls to merge two lists don’t merge the lists at the time of the call. Rather, the calls produce promises to merge the lists at some later point in time. Every call to merge adds another layer of pending merges, and increases the depth of the stack the merge process needs when the merged sequence is finally traversed. After a while, the number of pending merges gets high enough that the result can’t be traversed without overflowing the stack. This is the cause of my initial stack overflow.

So what’s the solution? One easy solution is to give up some amount of laziness.

(defn lazify-n! [ n seq ]
  (loop [n n seq seq]
    (if (> n 0)
      (recur (- n 1) (doall (lazify seq)))
      seq)))

The only difference between this new version of lazify-n and the previous is the call to doall on the fourth line. What doall does is force the full evaluation of a lazy sequence. So, while lazify-n! still produces an n-high tower of lazy sequences, they’ve all been fully traversed. Because LazySeq caches values the first time it’s traversed, there’s no need to recursively call up the tower of sequences to traverse the final output sequence. This gives up some laziness, but it avoids both stack overflow issues we’ve discussed in this blog post: the overflow on long input sequence lengths and the overflow on deeply nested lazy sequences. The cost (there’s always a cost) is that this requires more heap storage than many alternative structures.

August 19, 2013

As part of a team conversation this morning, I worked up a quick Java translation of some more-interesting-than-it-looks Clojure code. It’s a good example of how lexical closures map to objects.

The code we started out with was this:

(defn make-adder [x]
  (let [y x]
    (fn [z] (+ y z))))
 
(def add2 (make-adder 2))
 
(add2 4)

What this code does is define make-adder, a function that returns a new function that adds the constant x to its argument. In Java, it looks like this:

// Not needed in Clojure... the Clojure runtime implicitly gives types
// that look similar to this to all Clojure functions.
interface Function
{
   Integer invoke(Integer z);
}
 
public class Foo
{
   // (defn make-adder [x]
   //   (let [y x]
   //     (fn [z] (+ y z))))
   public static Function makeAdder(Integer x)
   {
       final Integer y = x;
 
       return new Function() {
           public Integer invoke(Integer z) {
               return z + y;
           }
       };
   }
 
   public static void main(String[] args)
   {
       // (def add2 (make-adder 2))
       Function add2 = Foo.makeAdder(2);
 
       // (add2 4)
       System.out.println(add2.invoke(4));
   }
}

Five lines of Clojure code translate into 30 lines of Java: a function definition becomes a class definition, with state.

This idiom is powerful, but coming from Java, the power is hidden behind unusual syntax and terse notation. There are good reasons for both the syntax and the notation, but if you’re not used to either, it’s very easy to look at a page of Clojure code and be completely lost. Getting past this barrier by developing an intuitive feel for the language is a major challenge faced by teams transitioning to Clojure and Scala. One of the goals I have for my posts in this blog is to help fellow developers along the way. It should be a fun ride.

June 29, 2011

One typical property of Lisp systems is that they intern symbols. Roughly speaking, when symbols are interned, two symbols with the same print name will also have the same identity. This design choice has several significant implications elsewhere in the Lisp implementation. It is also one of the places where Clojure differs from Lisp tradition.

In code, the most basic version of the intern algorithm is easy to express:

(define (intern! symbol-name)
   (unless (hash-has? *symbol-table* symbol-name)
      (hash-set! *symbol-table* symbol-name (make-symbol symbol-name)))
   (hash-ref *symbol-table* symbol-name))

This code returns the symbol with the given name in the global symbol table. If there's not already a symbol under that name in the global table, it creates a symbol with that name and stores it in the hash prior to returning it. This ensures that make-symbol is only called once for each symbol-name, and the symbol stored in *symbol-table* is always the symbol returned for a given name. Any two calls to intern! with the same name are therefore guaranteed to return the exact same, eq? symbol object. At a vCalc REPL, this looks like so (the fact that both symbols are printed with ##0 implies that they have the same identity):

user> (intern! "test-symbol")
; ##0 = test-symbol
user> (intern! "test-symbol")
; ##0 = test-symbol

This design has several properties that have historically been useful when implementing a Lisp. First, by sharing the internal representation of symbols with the same print name, interning can reduce memory consumption. A careful programmer can write an implementation of interned symbols that doesn't allocate any memory on the heap unless it sees a new, distinct symbol. Interning also gives a (theoretically) cheaper mechanism for comparing two symbols for equality. Enforcing identity equality for symbols with equal names means that a symbol name comparison can be reduced to a single machine instruction. In the early days of Lisp, these were very significant advantages. With modern hardware, they are less important. However, the semantics of interned symbols do still differ in important ways.

One example of this is that interned symbols make it easy to provide a global environment 'for free'. To see what I mean by this, here is the vCalc declaration of a symbol:

struct
{
     ...
     lref_t vcell;  // Current Global Variable Binding
     ...
} symbol;

Each symbol carries with it a set of fields that are specific to that symbol, and are created and initialized at the time the symbol is created. Because the vcell for the symbol is created at the same time as the symbol, the global variable named by the symbol is created at the same time as the symbol itself. Accessing the value of that global variable is done through a field stored at an offset relative to the beginning of the symbol. These benefits also accrue to property lists, as they can also be stored in a field of a symbol. This is a cheap implementation strategy for global variables and property lists, but it comes at the cost of imposing a tight coupling between two distinct concepts: symbols and the global environment.

The upside of this coupling is that it encourages the use of global symbol attributes (bindings and properties). During interactive programming at a REPL, global bindings turn out to be useful because they make it easy to 'say the name' of the bindings to the environment. For bindings that directly map to symbols, the symbol itself is sufficient to name the binding and use it during debugging. Consider this definition:

(define *current-counter-value* 0)

(define (next-counter-value)
   (incr! *current-counter-value*)
   *current-counter-value*)

This definition of next-counter-value makes it easy to inspect the current counter value. It's stored in a global variable binding, so it can be inspected and modified during debugging using its name: *current-counter-value*. A more modular style of programming would store the current counter value in a binding local to the definition of next-counter-value:

(let ((current-counter-value 0))
  (define (next-counter-value)
    (incr! current-counter-value)
    current-counter-value))

This is 'better' from a stylistic point of view, because it reduces the scope of the binding of current-counter-value entirely to the scope of the next-counter-value function. This eliminates the chance that 'somebody else' will break encapsulation and modify the counter value in a harmful fashion. Unfortunately, 'somebody else' also includes the REPL itself. The 'better' design imposes the cost that it's no longer as easy to inspect or modify the current-counter-value from the REPL. (Doing so requires the ability to inspect or name the local bindings captured in the next-counter-value closure.)

The tight coupling between interned symbols and global variable bindings should not come as a surprise, because interning a symbol necessarily makes the symbol itself global. In a Lisp that interns symbols, the following code fragment creates two distinct local variable bindings, despite the fact that the bindings are named by the same, eq? symbol: local-variable.

(let ((local-variable 0))
   (let ((local-variable 0))
      local-variable))

The mismatch between globally interned symbols and local bindings implies that symbols cannot as directly be involved in talking about local bindings. A Common Lisp type declaration is an S-expression that says something about the variable named by a symbol.

(declare (fixnum el))

In contrast, a Clojure type declaration is a reader expression that attaches metadata to the symbol itself:

^String x

The ^ syntax in Clojure gathers up metadata and then applies it using withMeta to the next expression in the input stream. In the case of a type declaration, the metadata gets applied to the symbol naming the binding. This can be done in one of two ways. The first is to destructively update metadata attached to an interned symbol. If Clojure had done this, then each occurrence of symbol metadata would overwrite whatever metadata was there before, and that one copy of the metadata would apply to every occurrence of the symbol in the source text. Every variable with the same name would have to have the same type declarations.

Clojure took the other approach, and avoids the problem by not interning symbols. This allows metadata to be bound to a symbol locally. In the Clojure equivalent of the local-variable example, there are two local variables, each named by a distinct symbol (but with the same name).

(let [local-variable 0]
   (let [local-variable 0]
      local-variable))
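The non-interning behavior is easy to see at a Clojure REPL (a quick sketch):

user> (identical? 'local-variable 'local-variable)
false
user> (= 'local-variable 'local-variable)
true
user> (meta (with-meta 'local-variable {:tag String}))
{:tag java.lang.String}

Two occurrences of the same name are equal but not identical, so each occurrence can carry its own metadata.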

The flexibility of this approach is useful, but it comes at the cost of losing the ability to store values in the symbols themselves. Global symbol property lists in Lisp have to be modeled using some other means. Global variable bindings are not stored in symbols, but rather in Vars. (This is something that compiled Lisps tend to do anyway.) These changes result in symbols in Clojure becoming slightly 'smaller' than in Lisp, and better aligned with how they are used in modern, lexically scoped Lisps. Because there are still global variable bindings available in the language, the naming benefits of globals are still available for use in the REPL. It's taken a while for me to get there, but the overall effect of un-interned symbols on the design of a Lisp seems generally positive.
