Mike Schaeffer's Blog

Articles with tag: ksm
March 31, 2014

Sometimes, it’s easy to focus so much on the architecture of a system that the details of its implementation get lost. While it’s true that inattention to architectural concerns can cause a system to fail, it’s also true that poor attention to the details can undermine even the best overall system design. This post covers a few minor details of code structure that I’ve found to be useful in my work:

It’s a small thing, but one of my favorite utility methods is a short way to throw run-time exceptions.

public static void FAIL(String message)
{
    throw new RuntimeException(message);
}

Defining this method accomplishes a few useful goals. The first is that (with an import static) it makes it possible to throw a RuntimeException with 22 fewer characters of source text per call site. If you’re writing usefully descriptive error messages (which you should be), this can significantly improve the readability of the code. The text FAIL tends to stand out in source code listings, and bringing the error message closer to the left margin of the source text makes it more obvious. The symbol FAIL is also easy to identify with tools like grep, ack, and M-x occur.
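
For example, a call site might look something like this (a minimal sketch; the Errors utility class and the configuration check are hypothetical, not from any real codebase):

import static com.example.util.Errors.FAIL;

public class ConfigLoader
{
    public void ensureLoaded(Object config)
    {
        // The failure message sits close to the left margin, where
        // it's easy to see, and FAIL is easy to grep for.
        if (config == null)
            FAIL("Configuration was not loaded before server startup.");
    }
}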

To handle re-throw scenarios, it's also useful to have another definition that lets you specify a cause for the failure.

public static void FAIL(String message, Throwable cause)
{
    throw new RuntimeException(message, cause);
}
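
A typical call site for this variant is a catch block that converts a checked exception into an unchecked failure while preserving the original exception as the cause. (This fragment is a hypothetical example, not code from a real project.)

try {
    reader.close();
} catch (IOException ex) {
    FAIL("Error closing command input stream.", ex);
}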

Related to this is a useful naming convention for loop control variables. Thanks in large part to FORTRAN and its mathematical heritage, it's very common to use the names i, j, and k for loop control variables. These names aren't very descriptive, but they're short, and for small loop bodies there's usually enough context that a longer name would be superfluous. (If your loop spans pages of text, you should use a more descriptive variable name... but first, you should try to break up your loop into sensible, testable functions.) One technique I've found useful for making loop control variables more obvious (and searchable) without going to fully descriptive variable names is to double up the letters, giving ii, jj, and kk.
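
In a hypothetical nested loop, the doubled letters stand out visually and can be searched for without false matches:

// ii and jj are easy to pick out in a listing, and a search for
// 'ii' won't match every identifier containing the letter i.
for (int ii = 0; ii < grid.length; ii++) {
    for (int jj = 0; jj < grid[ii].length; jj++) {
        total += grid[ii][jj];
    }
}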

These are small changes, but both can improve the readability of the code. Try them out and see if you like them. If you disagree that they are improvements, it's easy to switch back.

Tags: java, ksm
March 26, 2014

Update 2019-01-17: KSM recently redesigned their website in a way that removes the original blog. Because of this, I've taken some of what I wrote then for KSM and re-hosted it here. Thanks are due both to KSM Technology Partners for allowing me to do this and to the Wayback Machine for retaining the content. All the links below are updated to reflect the articles' new locations.


Sorry for the radio silence, but recently I've been focusing my writing time on the KSM Technology Partners Blog. My writing there is still technical in nature, but it tends to be more heavily focused on the JVM. If you're interested, here are a few of what I consider to be the highlights.

In mid-2013, I started out writing about how to use Runnable to explicitly enforce dynamic extent in Java. In a nutshell, this is a way to implement try...with...resources in versions of Java that don't have it built in to the language. I then used the dynamic extent technique to build a ThreadLocal that plays nicely with thread pools. This is useful because thread-local values depend on knowing which thread you're running on, a detail that thread pooling deliberately abstracts away.
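
The control-flow core of the dynamic extent technique looks roughly like this sketch (the class and method names here are mine for illustration, not the original post's):

import java.io.FileWriter;
import java.io.IOException;

public class DynamicExtent
{
    // Opens a log file, runs the thunk, and guarantees cleanup on
    // every exit path - normal return or thrown exception. A real
    // version would also expose the writer to code inside the
    // extent, for example through a thread-local variable.
    public static void withLogFile(String path, Runnable thunk)
    {
        try {
            FileWriter writer = new FileWriter(path);

            try {
                thunk.run();
            } finally {
                writer.close();
            }
        } catch (IOException ex) {
            throw new RuntimeException("Error in log file extent: " + path, ex);
        }
    }

    public static void main(String[] args)
    {
        withLogFile("example.log", () -> System.out.println("inside the extent"));
    }
}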

Later in the year, I focused more on Clojure, starting off with a quick bit on the relationship of lexical closures to Java inner classes. I also wrote about a particular kind of stack overflow exception that can happen with lazy sequences. Lazy sequences can nicely remove the need to use recursion while traversing their length, but each time two unrealized lazy sequences are combined, it adds to the recursive depth required to compute the first element. For me, this stack overflow was a difficult error to diagnose, because it seemed so counter-intuitive.

I'm also in the middle of a series of posts that relate the GoF command pattern to functional programming. The posts start off with Java, but will ultimately describe a Clojure implementation that compiles a stack based expression language into optimized Java bytecode. If you'd like to play with the code, it's on GitHub.

January 28, 2014

I’ll get back to rpncalc shortly, but before I do, I wanted to take a post to talk about a surprising problem I recently had with lazy sequences. As part of my day job, I am developing a Clojure based system for accumulating and displaying time series data on a web page. One of the core algorithms in my implementation is an incremental merge sort. I have a function that takes two seqs, both ordered by time, and produces a lazy result seq with all values from both inputs, also in time order. Every few seconds, as new input values are read from their sources, the program uses the ordered merge function to integrate the new values into a seq that contains a complete history of all values. It’s a straightforward and flexible design, and initially, it appeared to work quite well. The problems only started to arise after several hours of run time: traversing the history list would then immediately result in stack overflow exceptions.

If you’re familiar with lazy sequences, this may seem like an odd result. After all, one of the benefits of lazy sequences (aside from their laziness) is that they can eliminate recursion and reduce pressure on the stack. Lazy sequences might require more heap allocation, but they shouldn’t require all that much stack. To explore this idea a bit further, I’m going to use a simpler example than my merge function: a recursive version of map:

(defn my-map [ f xs ]
  (if (empty? xs)
    ()
    (cons (f (first xs)) (my-map f (rest xs)))))

For simple use cases, my-map has the same interface as the built-in function map:

user> (my-map #(+ 1 %) (range 3))
(1 2 3)
user> (map #(+ 1 %) (range 3))
(1 2 3)

The limitations of my-map start to become apparent for larger sequences:

user> (map #(+ 1 %) (range 10000))
(1 2 3 4 5 6 7 8 9 10 ...)
user> (my-map #(+ 1 %) (range 10000))
StackOverflowError   clojure.lang.Numbers.add (Numbers.java:1685)

What’s happening here is that the recursive call to my-map causes the function to allocate a stack frame for each element of the input sequence. The stack has a fixed size limit, so this places a fixed limit on the size of the sequence that my-map can manipulate. Any input sequences beyond that length limit will cause the function to overflow the stack. map gets around this through laziness, which is something that we can also use:

(defn my-map-lazy [ f xs ]
  (lazy-seq
    (if-let [ xss (seq xs) ]
      (cons (f (first xss)) (my-map-lazy f (rest xss)))
      ())))

With this definition, all is right with the world:

user> (my-map-lazy #(+ 1 %) (range 10000))
(1 2 3 4 5 6 7 8 9 10 ...)

While the text of my-map and my-map-lazy is similar, the two functions are quite different in operation. my-map completely computes the result of the mapping before it returns: it eagerly evaluates and returns the fully calculated result. In contrast, my-map-lazy doesn’t compute any of the mapping before it returns: it defers the calculation and returns a promise to compute the result later on. The difference may be clearer in a slightly macro-expanded form of my-map-lazy:

(defn my-map-lazy [ f xs ]
  (new clojure.lang.LazySeq
    (fn* []
      (if-let [ xss (seq xs) ]
        (cons (f (first xss)) (my-map-lazy f (rest xss)))
        ()))))

The only computations that happen between the entry and exit of my-map-lazy are the allocation of a new lexical closure and the instantiation of a new instance of LazySeq. While the body of my-map-lazy still contains a call to itself, the call doesn’t happen until after my-map-lazy returns and the LazySeq invokes the closure. There is no recursive call, and there is no risk of overflowing the stack. (The traversal state that was stored on the stack in the recursive version is stored on the heap in the lazy version.)

So why was my merge sort overflowing the stack? To see why, I’m going to introduce a new function built on Clojure’s map. This function serves no purpose other than to introduce a layer of laziness; it is only useful for the purposes of this discussion:

(defn lazify [ xs ]
  (map identity xs))

Because the evaluation of map is lazy, we can predict that what lazify returns is a LazySeq. This turns out to be true:

user> (.getClass [1 2 3 4 5])
clojure.lang.PersistentVector
user> (.getClass (lazify [1 2 3 4 5]))
clojure.lang.LazySeq

Calling lazify on the result of lazify produces another LazySeq, distinct from the first.

user> (def a (lazify [1 2 3 4 5]))
#'user/a
user> (.getClass a)
clojure.lang.LazySeq
user> (def b (lazify a))
#'user/b
user> (.getClass b)
clojure.lang.LazySeq
user> (identical? a b)
false

Due to the way lazify is defined, sequences a and b produce identical results - they both evaluate to (1 2 3 4 5). However, despite the similarity in the results they produce, the two sequences are distinct and produce their results through different code paths. Sequence a computes the identity of each element of the vector [1 2 3 4 5], and sequence b computes the identity of each element of sequence a. Sequence b has to go through sequence a to get each value from the vector that underlies both. Even with lazy sequences, realizing an element is still an eager, recursive process, and it still consumes stack.

To confirm this theory, I’ll use another function that applies lazify to a sequence any number of times.

(defn lazify-n [ n xs ]
  (loop [n n xs xs]
    (if (> n 0)
      (recur (- n 1) (lazify xs))
      xs)))

This function builds a tower of lazy sequences n sequences tall. Computing even the first element of the result sequence involves recursively computing the first element of every sequence in this tower, down to the original input seq to lazify-n. The depth of the stack required for this recursion is proportional to n. High values of n should produce sequences that can’t be traversed without throwing a stack overflow error. This turns out to be true:

user> (lazify-n 1 [1 2 3 4 5])
(1 2 3 4 5)
user> (lazify-n 4000 [1 2 3 4 5])
StackOverflowError   clojure.core/seq (core.clj:133)

Going back to my original merge sort stack overflow, it was caused by the same issue we see in lazify-n. The calls to merge two lists don’t merge the lists at the time of the call. Rather, each call produces a promise to merge the lists at some later point in time. Every call to merge adds another layer to the tower of pending merges, and increases the depth of the stack the merge process needs when the result is finally traversed. After a while, the tower of pending merges gets tall enough that the result can’t be traversed without overflowing the stack. This was the cause of my initial stack overflow.

So what’s the solution? One easy solution is to give up some amount of laziness.

(defn lazify-n! [ n xs ]
  (loop [n n xs xs]
    (if (> n 0)
      (recur (- n 1) (doall (lazify xs)))
      xs)))

The only difference between this new version and the previous is the call to doall on the fourth line. What doall does is force the full evaluation of a lazy sequence. So, while lazify-n! still produces a tower of lazy sequences n high, they’ve all been fully traversed. Because a LazySeq caches its values the first time it’s traversed, there’s no need to recursively call up the tower of sequences to traverse the final output sequence. This gives up some laziness, but it avoids both stack overflow issues we’ve discussed in this blog post: the overflow on long input sequences and the overflow on deeply nested lazy sequences. The cost (there’s always a cost) is that this requires more heap storage than many alternative structures.

January 2, 2014

Up to now, the calculator’s main command loop has been a straightforward implementation of a REPL, or ‘read-eval-print-loop’. If you’re unfamiliar with the term, REPLs are the traditional means that interactive programming languages use to provide their interactivity. REPLs provide a command prompt that a user can use to explore and manipulate the programming environment. In this way, a REPL makes it possible to work more quickly than traditional environments that require a program to be recompiled and restarted to test code changes.

While REPLs can become very complex in the details, the core idea is quite simple. As the name implies, REPLs read a command from the user, evaluate that command, print the result of that evaluation, and loop back to start again. In rpncalc, all four of these steps are clearly evident in the code of the REPL. This is useful for explanatory purposes, but it closely couples the REPL to specific implementations of ‘read’, ‘evaluate’, and ‘print’. For this post, we’ll look at another way to model a REPL in code, one that offers a way to break this coupling.

The main command loop of rpncalc contains explicit code for each of the steps in a REPL:


// Set initial state
State state = new State();
 
// Loop until we no longer have a state.
while(state != null) {
 
    // Print the current state
    System.out.println();
    showStack(state);
 
    // Print a prompt, and read the next command from the user.
    System.out.print("> ");
    String cmdLine = System.console().readLine();
 
    if (cmdLine == null)
        break;
 
    Command cmd = parseCommandString(cmdLine);
 
    // Evaluate the command and produce the next state.
    state = cmd.execute(state);
}

This code is easy to read and explicit in intent, but it totally breaks down if commands can’t be read from the console. In the case of a REPL running on a server, it may be the case that the REPL needs to print and read over a (secured!) network connection. What would be useful is a way to decouple the mechanism for reading commands from the loop itself.

In functionally oriented languages, this problem can be addressed by extending the REPL function with function arguments. These function arguments allow different implementations of read and print to be plugged into the same basic loop structure. Default implementations can be provided that connect to the console, with other implementations that might read and print using a network connection, or some other command transport. In Java, a similar effect can be achieved using functional interfaces (aka SAM types) to provide the pluggable alternative implementations. In fact, Java 8’s syntax for anonymous functions will make this approach syntactically convenient. Java also provides ways to achieve this extensibility via class derivation.
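
To sketch the idea concretely (the CommandReader interface and its method name are hypothetical, not from rpncalc; State, Command, showStack, and parseCommandString are the types and methods already shown in this series):

// A functional interface for the 'read' step of the REPL.
interface CommandReader
{
    String readCommandLine(); // null signals end of input
}

static void repl(CommandReader reader)
{
    State state = new State();

    while (state != null) {
        showStack(state);

        String cmdLine = reader.readCommandLine();

        if (cmdLine == null)
            break;

        // Evaluate the command and produce the next state.
        state = parseCommandString(cmdLine).execute(state);
    }
}

// The default, console-backed reader. A network-backed implementation
// could be passed to repl instead, without changing the loop at all.
static CommandReader consoleReader()
{
    return () -> {
        System.out.print("> ");
        return System.console().readLine();
    };
}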

Another way to view this problem can be seen by slightly changing your perspective on the REPL. It may not be completely obvious, but as with many loops, the REPL is iterating over a sequence of values. In the case of the REPL, the sequence is the sequence of commands that the user enters in response to prompts. For each command in the sequence, the REPL updates the current state and advances to the next sequence element. This isn’t as concrete as iterating over an in-memory data structure (and it isn’t necessarily bounded) but the semantics of the iteration are the same. The key to implementing this design is to provide a version of Iterable that implements iteration over a command stream. Given an iterable command stream, the REPL takes on a slightly different character:

// Set initial state
State state = new State();
 
// Loop over all input commands
for(Command cmd : new ConsoleCommandStream()) {
 
    // Evaluate the command and produce the next state.
    state = cmd.execute(state);
 
    if (state == null)
        break;
 
    // Print the current state
    showStack(state);
}

Compared to the initial loop implementation, this version is completely detached from the mechanisms used to prompt the user for input and read incoming commands. The termination criterion is also simpler: there isn’t an explicit check for the end of the command stream. The implicit termination check within the foreach loop captures that requirement.

The other component of this implementation is the implementation of the CommandStream. Unfortunately, this is where Java extracts its tax in lines of code for the additional modularity of this design. Like all iterable objects, the console command stream implements the Iterable interface. The iterator itself is defined as an anonymous inner class:

class ConsoleCommandStream implements Iterable<Command>
{
    public Iterator<Command> iterator()
    {
        return new Iterator<Command> ()
        {

One of the complexities of implementing Java’s Iterator interface is that callers must be able to call hasNext any number of times (zero to n) before each call to next. It’s not possible to assume one and only one call to hasNext for each call to next, despite the fact that the foreach loop does make that guarantee. Without going into the details, this implies that the actual advance operation can occur within either next or hasNext. While there are several ways to implement this, the approach I like to use is to have a separate method that advances the iterator, but only if it needs to be advanced. (Calls to next put the iterator into a ‘requires advance’ state.) The advanceIfNecessary method is where the bulk of the work of the command stream takes place, including prompting the user, and reading and parsing the command.

Command nextCmd = null;
 
private void advanceIfNecessary()
{
    if (nextCmd != null)
        return;
 
    System.out.println();
    System.out.print("> ");
 
    String cmdLine = System.console().readLine();
 
    if (cmdLine == null)
        return;
 
    try {
        nextCmd = parseCommandString(cmdLine);
    } catch (Exception ex) {
        throw new RuntimeException("Error while parsing command: " + cmdLine, ex);
    }
}
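
Given advanceIfNecessary, the remaining iterator methods are short. The listing above doesn't show them, but they might look something like this sketch:

public boolean hasNext()
{
    advanceIfNecessary();

    return nextCmd != null;
}

public Command next()
{
    advanceIfNecessary();

    if (nextCmd == null)
        throw new java.util.NoSuchElementException();

    Command cmd = nextCmd;

    // Consume the command, putting the iterator back into the
    // 'requires advance' state.
    nextCmd = null;

    return cmd;
}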

In this way, Java’s built-in support for iteration can be used to break the REPL apart into sub-components for handling the stages of command processing. The REPL is still clearly a REPL, but it no longer has explicit dependencies on the means used to acquire input commands. Unfortunately, the REPL still has explicit coupling for command evaluation and printing the result. As it stands now, we could modify the REPL to read commands from a network port, but we couldn’t redirect the output away from the local console. In the next post in the series, we’ll use the idea of reduce from functional programming to break the REPL into a pipeline of iterators. This will bring the rest of the flexibility we need.
