Mike Schaeffer's Blog

January 28, 2014

I’ll get back to rpncalc shortly, but before I do, I wanted to take a post to talk about a surprising problem I recently had with lazy sequences. As part of my day job, I am developing a Clojure based system for accumulating and displaying time series data on a web page. One of the core algorithms in my implementation is an incremental merge sort. I have a function that takes two seqs, both ordered by time, and produces a lazy result seq containing all values from both inputs, also in time order. Every few seconds, as new input values are read from their sources, the program uses the ordered merge function to integrate the new values into a seq that contains a complete history of all values. It’s a straightforward and flexible design, and initially, it appeared to work quite well. The problems only started to arise after several hours of run time: traversing the history list would then immediately result in stack overflow exceptions.

If you’re familiar with lazy sequences, this may seem like an odd result. After all, one of the benefits of lazy sequences (aside from the laziness itself) is that they can eliminate recursion and reduce pressure on the stack. Lazy sequences might require more heap allocation, but they shouldn’t require all that much stack. To explore this idea a bit further, I’m going to use a simpler example than my merge function: a recursive version of map:

(defn my-map [ fn xs ]
  (if (empty? xs)
     ()
     (cons (fn (first xs)) (my-map fn (rest xs)))))

For simple use cases, my-map has the same interface as the built-in function map:

user> (my-map #(+ 1 %) (range 3))
(1 2 3)
user> (map #(+ 1 %) (range 3))
(1 2 3)

The limitations of my-map start to become apparent for larger sequences:

user> (map #(+ 1 %) (range 10000))
(1 2 3 4 5 6 7 8 9 10 ...)
user> (my-map #(+ 1 %) (range 10000))
StackOverflowError   clojure.lang.Numbers.add (Numbers.java:1685)

What’s happening here is that the recursive call to my-map causes the function to allocate a stack frame for each element of the input sequence. The stack has a fixed size limit, so this places a fixed limit on the size of the sequence that my-map can manipulate. Any input sequences beyond that length limit will cause the function to overflow the stack. map gets around this through laziness, which is something that we can also use:

(defn my-map-lazy [ fn xs ]
  (lazy-seq
    (if-let [ xss (seq xs) ]
     (cons (fn (first xs)) (my-map-lazy fn (rest xs)))
     ())))

With this definition, all is right with the world:

user> (my-map-lazy #(+ 1 %) (range 10000))
(1 2 3 4 5 6 7 8 9 10 ...)

While the text of my-map and my-map-lazy is similar, the two functions are quite different in operation. my-map completely computes the result of the mapping before it returns: it eagerly evaluates and returns the fully calculated result. In contrast, my-map-lazy doesn’t compute any of the mapping before it returns: it defers the calculation and returns a promise to compute the result later on. The difference may be clearer in a slightly macro-expanded form of my-map-lazy:

(defn my-map-lazy [ fn xs ]
  (new clojure.lang.LazySeq
     (fn* []
       (if-let [xss (seq xs)]
          (cons (fn (first xs)) (my-map-lazy fn (rest xs)))
         ()))))

The only computations that happen between the entry and exit of my-map-lazy are the allocation of a new lexical closure and the instantiation of a new LazySeq. While the body of my-map-lazy still contains a call to itself, that call doesn’t happen until after my-map-lazy returns and the LazySeq invokes the closure. There is no recursive call, and there is no risk of overflowing the stack. (The traversal state that was stored on the stack in the recursive version is stored on the heap in the lazy version.)

So why was my merge sort overflowing the stack? To see why, I’m going to introduce a new function built on Clojure’s own map. It serves no purpose other than to introduce a layer of laziness, which is exactly what makes it useful for this discussion:

(defn lazify [ xs ]
  (map identity xs))

Because the evaluation of map is lazy, we can predict that what lazify returns is a LazySeq. This turns out to be true:

user> (.getClass [1 2 3 4 5])
clojure.lang.PersistentVector
user> (.getClass (lazify [1 2 3 4 5]))
clojure.lang.LazySeq

Calling lazify on the result of lazify produces another LazySeq, distinct from the first.

user> (def a (lazify [1 2 3 4 5]))
#'user/a
user> (.getClass a)
clojure.lang.LazySeq
user> (def b (lazify a))
#'user/b
user> (.getClass b)
clojure.lang.LazySeq
user> (identical? a b)
false

Due to the way lazify is defined, the sequences a and b produce equal results - they both evaluate to (1 2 3 4 5). Despite the similarity in the results they produce, though, the two sequences are distinct and produce their results through different code paths. Sequence a computes the identity of each element of the vector [1 2 3 4 5], and sequence b computes the identity of each element of sequence a. Sequence b has to go through sequence a to get at the values in the vector that underlies them both. Even with lazy sequences, that layered traversal happens eagerly and recursively when a value is finally demanded, and it consumes stack.

To confirm this theory, I’ll use another function that applies lazify to a sequence any number of times.

(defn lazify-n [ n seq ]
  (loop [n n seq seq]
    (if (> n 0)
      (recur (- n 1) (lazify seq))
      seq)))

This function builds a tower of lazy sequences n sequences tall. Computing even the first element of the result involves recursively descending through every sequence in this tower, down to the original input seq passed to lazify-n. The stack depth required for this recursion is proportional to n, so high values of n should produce sequences that can’t be traversed without throwing a stack overflow error. This turns out to be true:

user> (lazify-n 1 [1 2 3 4 5])
(1 2 3 4 5)
user> (lazify-n 4000 [1 2 3 4 5])
StackOverflowError   clojure.core/seq (core.clj:133)

Going back to my original merge sort stack overflow, it was caused by the same issue we see in lazify-n. The calls to merge two lists don’t actually merge them at the time of the call. Rather, each call produces a promise to merge the lists at some later point in time. Every call to merge adds another layer of unmerged lists, and increases the depth of the stack that the eventual merge operation needs. After a while, the number of pending merges gets high enough that they can’t be performed without overflowing the stack.
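
To make the connection concrete, here’s a minimal sketch of what such an ordered lazy merge might look like. merge-ordered is a hypothetical name for illustration, not my production function, and it assumes the inputs are already sorted according to compare:

(defn merge-ordered [xs ys]
  (lazy-seq
    (cond
      (empty? xs) ys
      (empty? ys) xs
      (<= (compare (first xs) (first ys)) 0)
        (cons (first xs) (merge-ordered (rest xs) ys))
      :else
        (cons (first ys) (merge-ordered xs (rest ys))))))

;; Each incremental update layers another unrealized lazy seq over the
;; accumulated history, something like:
;;   (def history (merge-ordered new-values history))

After hours of updates, the history seq sits on top of thousands of these unrealized layers - exactly the lazify-n situation.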

So what’s the solution? One easy answer is to give up some amount of laziness.

(defn lazify-n! [ n seq ]
  (loop [n n seq seq]
    (if (> n 0)
      (recur (- n 1) (doall (lazify seq)))
      seq)))

The only difference between this new version of lazify-n and the previous one is the call to doall on the fourth line. doall forces the full evaluation of a lazy sequence. So, while lazify-n! still produces an n-high tower of lazy sequences, each layer is fully realized as it’s built. Because a LazySeq caches its values the first time it’s traversed, there’s no need to recursively call up the tower of sequences when traversing the final output sequence. This gives up some laziness, but it avoids both stack overflow issues we’ve discussed in this blog post: the overflow on long input sequences and the overflow on deeply nested lazy sequences. The cost (there’s always a cost) is that it requires more heap storage, since every intermediate sequence is held fully realized in memory.
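
With that change, the deeply nested case that overflowed before should traverse cleanly; at the REPL I’d expect something like this:

user> (lazify-n! 4000 [1 2 3 4 5])
(1 2 3 4 5)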

January 2, 2014

Up to now, the calculator’s main command loop has been a straightforward implementation of a REPL, or ‘read-eval-print-loop’. If you’re unfamiliar with the term, REPLs are the traditional means by which interactive programming languages provide their interactivity. A REPL provides a command prompt that a user can use to explore and manipulate the programming environment. In this way, a REPL makes it possible to work more quickly than in traditional environments that require a program to be recompiled and restarted to test code changes.

While REPLs can become very complex in the details, the core idea is quite simple. As the name implies, a REPL reads a command from the user, evaluates that command, prints the result of that evaluation, and loops back to start again. In rpncalc, all four of these steps are clearly evident in the code of the REPL. This is useful for explanatory purposes, but it tightly couples the REPL to specific implementations of ‘read’, ‘evaluate’ and ‘print’. For this post, we’ll look at another way to model a REPL in code, one that offers a way to break this coupling.

The main command loop of rpncalc contains explicit code for each of the steps in a REPL:


// Set initial state
State state = new State();
 
// Loop until we no longer have a state.
while(state != null) {
 
    // Print the current state
    System.out.println();
    showStack(state);
 
    // Print a prompt, and read the next command from the user.
    System.out.print("> ");
    String cmdLine = System.console().readLine();
 
    if (cmdLine == null)
        break;
 
    Command cmd = parseCommandString(cmdLine);
 
    // Evaluate the command and produce the next state.
    state = cmd.execute(state);
}

This code is easy to read and explicit in intent, but it breaks down entirely if commands can’t be read from the console. In the case of a REPL running on a server, the REPL may need to print and read over a (secured!) network connection. What would be useful is a way to decouple the mechanism for reading commands from the loop itself.

In functionally oriented languages, this problem can be addressed by extending the REPL function with function arguments. These arguments allow different implementations of read and print to be plugged into the same basic loop structure. Default implementations can be provided that talk to the console, with other implementations that read and print using a network connection, or some other command transport. In Java, a similar effect can be achieved using functional interfaces (aka SAM types) to provide the pluggable alternative implementations. In fact, Java 8’s syntax for anonymous functions will make this approach syntactically convenient. Java also provides ways to achieve this extensibility via class derivation.
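
As a rough sketch of that functional-interface approach (the names CommandReader and StatePrinter are illustrative, not actual rpncalc types), the pluggable pieces might look like this:

// Hypothetical single-method interfaces for the pluggable REPL steps.
interface CommandReader {
    String readCommandLine();   // returns null at end of input
}

interface StatePrinter {
    void print(State state);
}

// Java 8 lambdas supply the console-backed defaults.
CommandReader consoleReader = () -> {
    System.out.print("> ");
    return System.console().readLine();
};

StatePrinter consolePrinter = state -> showStack(state);

The REPL itself would then accept a CommandReader and a StatePrinter as arguments, leaving the loop structure unchanged.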

Another way to view this problem comes from slightly changing your perspective on the REPL. It may not be completely obvious, but as with many loops, the REPL is iterating over a sequence of values. In the case of the REPL, the sequence is the sequence of commands that the user enters in response to prompts. For each command in the sequence, the REPL updates the current state and advances to the next element. This isn’t as concrete as iterating over an in-memory data structure (and it isn’t necessarily bounded), but the semantics of the iteration are the same. The key to implementing this design is to provide a version of Iterable that iterates over a command stream. Given an iterable command stream, the REPL takes on a slightly different character:

// Set initial state
State state = new State();
 
// Loop over all input commands
for(Command cmd : new ConsoleCommandStream()) {
 
    // Evaluate the command and produce the next state.
    state = cmd.execute(state);
 
    if (state == null)
        break;
 
    // Print the current state
    showStack(state);
}

Compared to the initial loop implementation, this version is completely detached from the mechanisms used to prompt the user for input and read incoming commands. The termination condition is also simpler: there isn’t an explicit check for the end of the command stream. The implicit check within the foreach loop captures that requirement.

The other component of this design is the CommandStream implementation itself. Unfortunately, this is where Java extracts its tax in lines of code for the additional modularity of the design. Like all iterable objects, the console command stream implements the Iterable interface. The iterator itself is defined as an anonymous inner class:

class ConsoleCommandStream implements Iterable<Command>
{
    public Iterator<Command> iterator()
    {
        return new Iterator<Command> ()
        {

One of the complexities of implementing Java’s Iterator interface is that callers must be able to call hasNext any number of times (zero to n) before each call to next. It’s not possible to assume one and only one call to hasNext for each call to next, even though the foreach loop does make that guarantee. Without going into the details, this implies that the actual advance operation can occur within either next or hasNext. While there are several ways to implement this, the approach I like to use is to have a separate method that advances the iterator, but only if it needs to be advanced. (Calls to next put the iterator back into a ‘requires advance’ state.) This advanceIfNecessary method is where the bulk of the work of the command stream takes place, including prompting the user, and reading and parsing the command.

Command nextCmd = null;
 
private void advanceIfNecessary()
{
    if (nextCmd != null)
        return;
 
    System.out.println();
    System.out.print("> ");
 
    String cmdLine = System.console().readLine();
 
    if (cmdLine == null)
        return;
 
    try {
        nextCmd = parseCommandString(cmdLine);
    } catch (Exception ex) {
        throw new RuntimeException("Error while parsing command: " + cmdLine, ex);
    }
}
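
For completeness, here’s a sketch of how hasNext and next might sit on top of advanceIfNecessary. This follows the pattern described above, with error handling for reading past the end of the stream omitted, and it may differ in detail from the actual rpncalc source:

public boolean hasNext()
{
    advanceIfNecessary();

    return nextCmd != null;
}

public Command next()
{
    advanceIfNecessary();

    Command cmd = nextCmd;

    // Put the iterator back into the 'requires advance' state.
    nextCmd = null;

    return cmd;
}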

In this way, Java’s built-in support for iteration can be used to break the REPL apart into sub-components that handle the stages of command processing. The REPL is still clearly a REPL, but it no longer has explicit dependencies on the means used to acquire input commands. Unfortunately, the REPL is still explicitly coupled to command evaluation and printing of the result. As it stands now, we could modify the REPL to read commands from a network port, but we couldn’t redirect the output away from the local console. In the next post in the series, we’ll use the idea of reduce from functional programming to break the REPL into a pipeline of iterators. This will bring the rest of the flexibility we need.

December 17, 2013

Throughout the last four parts of this series, the common theme has been that the state of the calculator program has been managed globally. This requires the main command loop to directly update global data after each command, to prepare the state for the next command. While this works, it would be nice to remove the need for the global update. This post talks about how that’s done in the functional version of rpncalc.

The last installment in the series updated the calculator by adding a Java class to model State. This allows the entire state of the calculator to be managed as a first-class entity within the program. That’s evident from the fact that the calculator now models state through a single global reference:

public class RpnCalc extends Calculator
{
    // ...
 
    private State state = null;

Before I continue, I should clarify that state isn’t truly a global variable. Obviously, state is an instance variable within a single instance of RpnCalc. However, because the calculator program only ever runs a single instance of RpnCalc, any instance variables within that class are effectively global within the program, and they suffer many of the same problems associated with true globals. The same parallel holds for instance variables within singletons managed by Spring: they’re instance variables, but they act like globals and should be viewed with the same kind of suspicion.

To remove the global state, we’re going to take advantage of the fact that the last update to rpncalc changed the signature for a command, so that it returns a new copy of the state:

abstract class Command
    {
        State execute(State in)

This implies that the main command loop has direct access to the updated state, and can manage it as a local variable:

public void main()
    throws Exception
{
    State state = new State();

    while(state != null) {
        // ...read and gather command...
        state = cmd.execute(state);
    }
}

This eliminates the global variable, moving the same data to a single variable contained within a function definition. This hugely reduces the variable’s footprint within the code. The global variable can be read and updated by 200 lines of code, compared to 20 lines of code for the local variable. For a developer, this makes the management of state more evident within the code, and makes it easier to write and maintain correct code.

Unfortunately, there’s a problem. For most of the commands, storing state in a local variable works well: a command like addition only needs access to the current state to produce the next state. Where it breaks down is in the undo command. In contrast to every other command, undo doesn’t need access to the current state; it needs access to the previous state. Now that we’ve moved state management entirely into the command loop function, there’s no way for the undo command to get at the history of states. Given that undo is a requirement, we need to find a solution.

One approach would be to extend the command loop to maintain a full history of states, and then broaden the command function signature to accept both the current and previous states. Each command would then be a function on a history of states. This approach has some appeal, but a simpler way to achieve the same result is to acknowledge that each state is a function of the previous state, and to store a back reference within each state:

private class State
{
    State prev;
    // ...
 
    State()
    {
        prev = null;
        // ...
    }
 
    State(State original)
    {
        prev = original;
        // ...
    }
}

This makes undo easy:

cmds.put("undo", new Command() {
        public State execute(State in) {
            // skip past the copy made for 'undo', and get to the previous.
            return in.prev.prev;
        }
    });

This design is also closely related to how git stores commits: each commit stores a back link to the previous commit in the history. It does have potentially unbounded memory usage, but our state in rpncalc is small enough, and memory is large enough, that this shouldn’t present a problem. If it ever did present a memory consumption problem, adding a bound on the number of retained states is as simple as walking down the previous links and nulling out the previous link of the last history element to retain.
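
A sketch of what that bound might look like - truncateHistory is a hypothetical helper for illustration, not part of the current rpncalc code:

// Walk maxDepth states back from the newest state, then cut the chain
// so anything older becomes unreachable (and garbage collectable).
void truncateHistory(State newest, int maxDepth)
{
    State current = newest;

    for(int depth = 1; current != null && depth < maxDepth; depth++)
        current = current.prev;

    if (current != null)
        current.prev = null;
}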

Next up, we’ll start taking a bit more advantage of the newly functional nature of rpncalc, and start using functional techniques to decompose the main command loop into component modules.

December 10, 2013

In the last installment of this series, I built a basic undo facility on top of the command pattern. One of the problems with that implementation is that the Command class has to know too much about how to save and restore the overall state of the calculator. In this post, I’ll introduce a way around this problem.

To see the problem, note that every instance of Command has a private member variable for each global state variable maintained by the calculator:

private Deque<Double> oldStack;
private Double[] oldRegs;

Commands also have methods to save and restore the global state from the calculator instance itself:

void saveState()
{
    oldStack = new LinkedList<Double>(stack);
    oldRegs = Arrays.copyOf(regs, regs.length);
}
 
void restoreState()
{
    stack = new LinkedList<Double>(oldStack);
    regs = Arrays.copyOf(oldRegs, oldRegs.length);
}

Aside from the fact that Commands are directly updating the global state of the calculator, this design makes it more difficult to extend or modify the calculator’s notion of state. Any change to the calculator’s design that adds a global state variable forces the Command class to be extended to manage the new state. This means at least adding new private fields to Command and updating the saveState and restoreState implementations. The state-object version of RpnCalc addresses this issue by introducing a new class for the purpose of managing calculator state: State.

As you might have guessed, State has the two fields we need to store our current notion of state:

Deque<Double> stack = null;
Double[] regs = null;

There are also two constructors, one for constructing a new, blank state, and a copy constructor for duplicating an existing state:

State()
{
    stack = new LinkedList<Double>();
    regs = new Double[NUM_REGISTERS];
}
 
State(State original)
{
    stack = new LinkedList<Double>(original.stack);
    regs = Arrays.copyOf(original.regs, original.regs.length);
}

The State class simplifies the calculator’s state variable declaration down to a single member variable:

private State state = null;

Along with that simplification, Command no longer needs to worry about state management, so it’s back down to a single method:

abstract class Command
{
    abstract State execute(State in);
}

The biggest change here is that the execute method now returns an instance of State. With state objects, a command accepts a ‘before’ state and returns the state that results from applying the command to that input. Every implementation of Command, save for two, first makes a copy of the input state using the copy constructor, then updates the copy and returns it to the caller. The command doesn’t touch the input state at all, aside from the initial copy operation (which isn’t an update).
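
To make the pattern concrete, here’s what a typical arithmetic command looks like under this scheme - a sketch of the copy-then-update idiom rather than a verbatim quote of the rpncalc source:

cmds.put("+", new Command() {
        public State execute(State in) {
            // Copy first; the input state is never modified.
            // (Argument checking omitted for brevity.)
            State out = new State(in);

            Double y = out.stack.pop();
            Double x = out.stack.pop();

            out.stack.push(x + y);

            return out;
        }
    });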

At this point, you might want to re-read that last paragraph - it’s kind of a big deal. With this latest change to the calculator, we’ve really made two big improvements. The first is the change that motivated the post in the first place: Commands no longer need to know how to save and restore state, and they no longer need to be aware of the contents of that state. The second is that Commands no longer modify global state at all. Each command is now a function, mapping from state to state.

This second aspect of the design change has many positive implications, but the most immediate is that it makes undo easier to implement. Because the input state passed into Command.execute isn’t touched, the caller can save off the previous state in its entirety. The calculator’s main command loop does just that:

Command cmd = parseCommandString(cmdLine);
 
State initialState = state;
 
state = cmd.execute(state);
 
lastState = initialState; // (lastState is an instance variable alongside state)

The undo command itself just ignores its input state and returns whatever the lastState was:

cmds.put("undo", new Command() {
        public State execute(State in) {
            return lastState;
        }
    });

This version of the calculator gets us most of the way to a mathematically functional implementation. The next installment will take us further by removing state and lastState.
