Aspects without Aspects

In the previous blog posts, we saw that we could hide the problematic concrete exception type from the C# compiler by tucking it inside a transformation from a closure of type Func<TR> to another closure of the same type. But of course we can use such transformations for many things besides exception handling. Any old behaviour that we would like to throw on top of the business logic, we can apply in layers using this approach.

This capability is so cool that I took a break from writing this blog post to share my enthusiasm with my wife. She was like, “what are you blogging about?”, and I was like “there’s this really cool thing you can do, where you apply this transformation to some method call, and then you can like, do additional stuff with it, entirely transparently!”, and she was like “like what?”, and I was like “like anything!”, and she was like “like what?”, and I was like “anything you want!”, but she was still like “like what though?” and then I turned more like “uh… uh… like say you had this method that returned a string – you could easily transform that into a method that looked exactly the same, but returned the reversed string instead”, and she was like “…the reversed string? why?” and I was like “or-or-or maybe you could return the uppercase string instead…?”, and she was like “uppercase?” with totally uppercase eyebrows and I was like “nonono! I got it! say you had this method that did a really expensive and slow computation, you could turn that into a method that would keep along the result that computation, so you didn’t have to do the actual computation all the time”, and she was like “oh, that’s cool” and I was like “phew! I’m never talking to her about blogging again!”.

So that was a close call. But yes, you can totally use this for caching. All we need is a suitable transformation thing.
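
Here is a sketch of what such a transformation might look like, as an extension method (the name Cached is a guess; the cached and t variables are the ones the next paragraph refers to):

```csharp
// A sketch of the caching transformation: Func<TR> in, Func<TR> out.
public static class CacheExtensions
{
    public static Func<TR> Cached<TR>(this Func<TR> f)
    {
        bool cached = false;
        TR t = default(TR);
        return () =>
        {
            if (cached) return t;
            t = f();
            cached = true;
            return t;
        };
    }
}
```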

Here, we’re taking advantage of the fact that C# has mutable closures – that is, that we can write to cached and t from inside the body of the lambda expression.

To verify that it works, we need a suitable example – something that’s really expensive and slow to compute. And as we all know, one of the most computationally intensive things we can do in a code example is to sleep:
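
Something like this should do (a sketch; the Timed helper is made up for the demo):

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class Program
{
    static void Main()
    {
        Func<string> q = () =>
        {
            Thread.Sleep(1000); // massively expensive computation
            return "Hard-obtained string";
        };
        var cachedQ = q.Cached();
        for (int i = 0; i < 3; i++) Timed(q);       // three slow calls
        for (int i = 0; i < 3; i++) Timed(cachedQ); // one slow call, two quick ones
    }

    static void Timed(Func<string> f)
    {
        var sw = Stopwatch.StartNew();
        Console.WriteLine("{0} ({1} ms)", f(), sw.ElapsedMilliseconds);
    }
}
```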

Well, what kind of behaviour should we expect from this code? Obviously, the first three q calls will be slow. But what about the last three? The last three execute the caching closure instead. When we execute the fourth call, cached is false, and so the if test fails, and we proceed to evaluate the original, non-caching q (which is slow), tuck away the result value for later, set the cached flag to true, and return the computed result – the hard-obtained string. But the fifth and sixth calls should be quick, since cached is now true, and we have a cached result value to return to the caller, without ever having to resort to the original q.

That’s theory. Here’s practice:

So that seems to work according to plan. What else can we do? We’ve seen exception handling in the previous posts and caching in this one – both examples of so-called “cross-cutting concerns” in our applications. “Cross-cutting concerns” was hot terminology ten years ago, when the enterprise world discovered the power of the meta-object protocol (without realizing it, of course). It did so in the guise of aspect-oriented programming, which carried with it a whole vocabulary besides the term “cross-cutting concerns” itself, including “advice” (the additional behaviour to apply), “join points” (places in your code where the additional behaviour may be applied) and “pointcuts” (a way of specifying declaratively which join points the advice applies to). And indeed, we can use these transformations that we’ve been doing to implement a sort of poor man’s aspects.

Why a poor man’s aspects? What’s cheap about them? Well, we will be applying advice at various join points, but we won’t be supporting pointcuts to select them. Rather, we will be applying advice to the join points manually. Arguably, therefore, it’s not really aspects at all, and yet we get some of the same capabilities. That’s why we’ll call them aspects without aspects. Makes sense?

Let’s consider wrapping some closure f in a hypothetical try-finally-block, and see where we might want to add behaviour.
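
Something like this, with the candidate places marked (a sketch):

```csharp
static Func<TR> Wrap<TR>(Func<TR> f)
{
    return () =>
    {
        try
        {
            // 1. before the call to f
            var result = f();
            // 2. after f has completed successfully
            return result;
        }
        finally
        {
            // 3. after f, whether it completed or threw
        }
    };
}
```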

So we’ll create extension methods to add behaviour in those three places. We’ll call them Before, Success and After, respectively.
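
A sketch of the three extension methods; for Success and After there is a second overload that receives the return value, as discussed below:

```csharp
public static class AspectExtensions
{
    public static Func<TR> Before<TR>(this Func<TR> f, Action a)
    {
        return () => { a(); return f(); };
    }

    public static Func<TR> Success<TR>(this Func<TR> f, Action a)
    {
        return () => { var result = f(); a(); return result; };
    }

    public static Func<TR> Success<TR>(this Func<TR> f, Action<TR> a)
    {
        return () => { var result = f(); a(result); return result; };
    }

    public static Func<TR> After<TR>(this Func<TR> f, Action a)
    {
        return () => { try { return f(); } finally { a(); } };
    }

    public static Func<TR> After<TR>(this Func<TR> f, Action<TR> a)
    {
        return () =>
        {
            var result = default(TR);
            try { result = f(); return result; }
            finally { a(result); } // sees default(TR) if f threw
        };
    }
}
```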

Note that we have two options for each of the join points that occur after the call to the original f closure. In some cases you might be interested in the value returned by f, in others you might not be.

How does it work in practice? Let’s look at a contrived example.
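
A reconstruction of that example (the messages are taken from the output described below; the rest is guesswork):

```csharp
Func<Func<string>, Func<string>> advised = fn => fn
    .Before(() => Console.WriteLine("The first Before."))
    .Success(s => Console.WriteLine("Successfully obtained: " + s))
    .Before(() => Console.WriteLine("The second Before."))
    .After(() => Console.WriteLine("The After."));

Func<string> m1 = advised(() =>
{
    Console.WriteLine("Executing m1…");
    return "Hello Kiczales!";
});

Func<string> m2 = advised(() =>
{
    Console.WriteLine("Executing m2…");
    throw new Exception("Boom!");
});

Console.WriteLine("What did I get? " + m1());
try { m2(); } catch (Exception) { /* swallowed for the demo */ }
```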

So here we have a transformation thing that takes a Func<string> closure and returns another Func<string> closure, with several pieces of advice applied. Can you work out when the different closures will be executed?

We start with some closure fn, but before fn executes, the first Before must execute (that’s why we call it Before!). Assuming both of these execute successfully (without throwing an exception), the Success will execute. But before all these things, the second Before must execute! And finally, regardless of how the execution turns out with respect to exceptions, the After should execute.

In the case of m1, no exception occurs, so we should see the message “Successfully obtained: Hello Kiczales!” in between “Executing m1…” and “What did I get? Hello Kiczales!”. In the case of m2, on the other hand, we do get an exception, so the Success closure is never executed.

A screenshot of my console verifies this:

[Screenshot: console output for m1 and m2, showing the expected execution order]

So we’ve seen that we can do fluent exception handling, caching and aspects without aspects using the same basic idea: we take something of type Func<TR> and produce something else of the same type. Of course, this means that we’re free to mix and match all of these things if we wanted to, and compose them all using Linq’s Aggregate method! For once, though, I think I’ll leave that as an exercise for the reader.

And of course, we can transform other things besides closures as well – we can use the same approach to transform any instance of type T to some other T instance. In fact, let’s declare a delegate to capture such a generalized concept:
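
Given the name discussed below, presumably something like this:

```csharp
public delegate T Decorate<T>(T t);
```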

Why Decorate? Well, nothing is ever new on this blog. I’m just rediscovering old ideas and reinventing flat tires, as Alan Kay put it. In this case, it turns out that all we’ve been doing is looking at the good old Decorator pattern from the GoF book in a new or unfamiliar guise.


Rethrow recoil

The closure-based exception handling scheme in the previous blog post works almost perfectly, and it would have worked entirely perfectly if a catch block were an ordinary block of code. But alas, it is not. Not quite. A catch block is special in that there is one statement that you can make inside a catch block that you cannot make anywhere else in your program. Unfortunately, it is also a fairly common statement to use inside catch blocks: the rethrow statement – that is, a throw statement with no operand. We cannot simply use a throw statement with an operand instead, since that has different semantics. In fact, it’s not a rethrow at all, it’s a brand new throw (even when we’re using the exception we just caught). The consequence is that the original stack trace is lost. In other words, we lose track of where the exception originally occurred, which is almost never what we want.

So that’s unfortunate. Unacceptable even. Can we fix it? Of course we can fix it! We’re programmers, we can fix anything!

There’s no way to put a rethrow into our lambda expression though, so we need to do something else. That something else turns out to be trivial. We do have a genuine catch-block available, so we’ll just put it there. Or rather, we’ll create a new try-catch block with a hard-coded rethrow inside, and put that block inside a new extension method which complements the one we created last time. Like so:
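
A sketch of the new extension method (named Touch, as explained below; it lives in some static class, as extension methods do):

```csharp
public static Func<TR> Touch<TR, TE>(this Func<TR> f, Action<TE> h)
    where TE : Exception
{
    return () =>
    {
        try
        {
            return f();
        }
        catch (TE ex)
        {
            h(ex);
            throw; // a genuine rethrow: the original stack trace survives
        }
    };
}
```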

In this case, we use an Action<TE> instead of a Func<TE, TR>, because obviously the exception handler won’t be returning anything – we just hard-coded a rethrow in there!

Why Touch and not an overload of the Catch method we saw before? Well, first of all we’re not catching the exception, we’re merely touching it on the way through – hence Catch is not really a suitable name for what we’re doing. And besides, the following code snippet would be ambiguous, at least to the reader of the code. Assuming we had overloaded the Catch method, how should we interpret something like this?
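
Something like this, with fn some Func<string>:

```csharp
var safe = fn.Catch((Exception ex) => { throw ex; });
```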

Is that an Action<TE> or a Func<TE, TR>? I have no idea. It turns out that the compiler is OK with the ambiguity – it decides it must be an Action<TE> (which leaves us with a rather strange catch block where the throw ex statement in the handler terminates the block before the rethrow occurs). But I’m not! Better to be explicit. Touch it is.

Now we can write code like this:
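
A reconstruction, reusing the Catch method from the previous post (the method name DoSomethingDangerous and the exact strings are guesses based on the description below):

```csharp
Func<string> fn = () => DoSomethingDangerous();
var safe = fn
    .Touch((NullReferenceException ex) => Console.WriteLine("I saw you"))
    .Catch((ArgumentException ex) => "Something")
    .Catch((Exception ex) => "ex");
Console.WriteLine(safe());
```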

So what happens in the last line? Well, if an ArgumentException is thrown, we’ll see the string “Something” written. If a NullReferenceException is thrown, however, we’ll see the “I saw you” string, in addition to “ex”, since the exception percolates up to the outer handler where it is caught and swallowed.

Fixed!


Linq to Exceptions

Lately I’ve been thinking about the mundane task of exception handling. Remarkably unsexy, and yet completely mandatory. There’s no escape from handling the exceptions that may (and therefore will) occur in your application, unless you want it to crash or misbehave regularly.

Proper exception handling can be a chore anywhere in your application, but in particular at integration points. Whenever your application needs to communicate with some external service, any number of things can go wrong, at various levels in the stack – from application level errors to network problems. We need to be able to cope with all these different kinds of errors, and hence we surround each service call with a veritable flood of exception handlers. That sort of works, but there are problems – first, the actual service is drowning amidst all the exception handling, and second, we get a lot of code duplication, since we tend to handle many kinds of exceptions in the same way for all service calls.

I’ve been discussing how to best solve this problem in a completely general and flexible way in several impromptu popsicle- and coffee-driven sessions with some of my compatriots at work (in particular Johan and Jonas – thanks guys!). Some of the code examples in this blog post, especially those that exhibit some trace of intelligence, can legitimately be considered rip-offs, evolutions, misunderstandings or various other degenerations of similar code snippets they have come up with.

But I’m getting ahead of myself. Let’s return to the root of the problem: how exception handling code has a tendency to overshadow the so-called business logic in our applications and to cause severe cases of duplication.

The problem is minor if there is a single exception we’re catching:
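
For instance (a sketch; the method and exception names are placeholders used throughout this post):

```csharp
public string SafeMethod1()
{
    try
    {
        return Method1();
    }
    catch (SomeException)
    {
        // handle it somehow
        return default(string);
    }
}
```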

However, things regress quickly with additional types of exceptions:
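
Something like this (same placeholder names):

```csharp
public string SafeMethod1()
{
    try
    {
        return Method1();
    }
    catch (SomeException)
    {
        return default(string);
    }
    catch (SomeOtherException)
    {
        return default(string);
    }
    catch (YetAnotherException)
    {
        return default(string);
    }
    catch (Exception)
    {
        return default(string);
    }
}
```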

This is a bit messy, and the signal-to-noise ratio is low. It gets much worse when you have a second piece of interesting code that you need to guard with the same exception handling. Suddenly you have rampant code repetition on your hands.

A solution would be to use a closure to inject the interesting code into a generic exception handling method, like this:
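
A sketch of such a method, reusing the handlers from above:

```csharp
public static TR Call<TR>(Func<TR> f)
{
    try
    {
        return f();
    }
    catch (SomeException)
    {
        return default(TR);
    }
    catch (SomeOtherException)
    {
        return default(TR);
    }
    catch (YetAnotherException)
    {
        return default(TR);
    }
    catch (Exception)
    {
        return default(TR);
    }
}
```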

And then you can use this in your methods:
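
For instance:

```csharp
public string SafeMethod1()
{
    return Call(() => Method1());
}
```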

Adding another method is trivial:
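
Like so:

```csharp
public string SafeMethod2()
{
    return Call(() => Method2());
}
```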

This works pretty nicely and solves our immediate issue. But there are a couple of limitations. In particular, three problems spring to mind:

  1. What if you want to return some legal, non-default value of type TR from one or more of the catch-blocks? As it stands now, each catch-block must either rethrow or return the default value of TR.
  2. What if there are variations in how you want to handle some of the exceptions? For instance, it may be that YetAnotherException should be handled differently for each method.
  3. What if there are slight variations between the “catching” needs for the various methods? What if you decided that SafeMethod2 doesn’t need to handle SomeOtherException, whereas SafeMethod3 should handle IdiosyncraticException in addition to the “standard” ones?
As an answer to the first two problems, you could pass in each exception handler to the Call method! Then you would have a method like this:
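
A sketch (parameter names are guesses):

```csharp
public static TR Call<TR>(
    Func<TR> f,
    Func<SomeException, TR> h1,
    Func<SomeOtherException, TR> h2,
    Func<YetAnotherException, TR> h3,
    Func<Exception, TR> h4)
{
    try
    {
        return f();
    }
    catch (SomeException ex) { return h1(ex); }
    catch (SomeOtherException ex) { return h2(ex); }
    catch (YetAnotherException ex) { return h3(ex); }
    catch (Exception ex) { return h4(ex); }
}
```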

And at this point you’re about to stop reading this blog post, because WTF. Now your methods look like this:
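
Roughly like this:

```csharp
public string SafeMethod1()
{
    return Call(() => Method1(),
        (SomeException ex) => default(string),
        (SomeOtherException ex) => default(string),
        (YetAnotherException ex) => default(string),
        (Exception ex) => default(string));
}
```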

So we’re pretty much back at square one, except it’s a bit more convoluted and confusing for the casual reader. And we still don’t have a good solution for the third problem. We could of course fake “non-handling” of SomeOtherException for SafeMethod2 by simply handing in a non-handling handler, that is, one that simply rethrows directly: ex => { throw ex; }. But that’s ugly, and what about the IdiosyncraticException? Well, it’s not going to be pretty:
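
A sketch of the ugliness:

```csharp
public string SafeMethod3()
{
    try
    {
        return Call(() => Method3(),
            (SomeException ex) => default(string),
            (SomeOtherException ex) => default(string),
            (YetAnotherException ex) => default(string),
            (Exception ex) => default(string));
    }
    catch (IdiosyncraticException)
    {
        return default(string); // reached only if the Exception handler rethrows
    }
}
```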

Which just might be the worst code ever, and also has the idiosyncrasy that the additional catch-handler will only be reached if the Exception handler rethrows. Horrible. Better, perhaps, to put it inside?
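
Like this, perhaps:

```csharp
public string SafeMethod3()
{
    return Call(() =>
        {
            try
            {
                return Method3();
            }
            catch (IdiosyncraticException)
            {
                return default(string);
            }
        },
        (SomeException ex) => default(string),
        (SomeOtherException ex) => default(string),
        (YetAnotherException ex) => default(string),
        (Exception ex) => default(string));
}
```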

Well yes, slightly better, but still pretty horrible, and much worse than just suffering the duplication in the first place. But maybe we’ve learned something. What we need is composition – our current solution doesn’t compose at all. We need to be able to put together exactly the exception handlers we need for each method, while at the same time avoiding repetition. The problem is in some sense the coupling between the exception handlers. What if we tried a different approach, handling a single exception at a time?

We could have a less ambitious Call-method that would handle just a single type of exception for a method. Like this:
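
A sketch:

```csharp
public static TR Call<TR, TE>(Func<TR> f, Func<TE, TR> h)
    where TE : Exception
{
    try
    {
        return f();
    }
    catch (TE ex)
    {
        return h(ex);
    }
}
```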

Now we have a single generic exception handler h. Note that when we constrain the type variable TE to be a subclass of Exception, we can use TE in the catch clause, to select precisely the exceptions we would like to catch. Then we could write a method like this:
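
For instance:

```csharp
public string SafeMethod1()
{
    return Call<string, SomeException>(
        () => Method1(),
        ex => default(string));
}
```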

What if we wanted to catch another exception as well? The solution is obvious:
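
We wrap the method we already have (the name is a guess):

```csharp
public string SaferMethod1()
{
    return Call<string, SomeOtherException>(
        () => SafeMethod1(),
        ex => default(string));
}
```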

And yet another exception?
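
One more layer:

```csharp
public string SafestMethod1()
{
    return Call<string, YetAnotherException>(
        () => SaferMethod1(),
        ex => default(string));
}
```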

You get the picture. Of course, we can collapse all three to a single method if we want to:
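
Like so:

```csharp
public string SafeMethod1()
{
    return Call<string, YetAnotherException>(
        () => Call<string, SomeOtherException>(
            () => Call<string, SomeException>(
                () => Method1(),
                ex => default(string)),
            ex => default(string)),
        ex => default(string));
}
```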

What have we gained? Well, not readability, I’ll admit that. But we’ve gained flexibility! Flexibility goes a long way! And we’ll work on the readability shortly. First, though: just in case it’s not clear, what we’ve done is create an exception handling scenario similar to this:
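
That is, nested try-catch blocks:

```csharp
public string SafeMethod1()
{
    try
    {
        try
        {
            try
            {
                return Method1();
            }
            catch (SomeException)
            {
                return default(string);
            }
        }
        catch (SomeOtherException)
        {
            return default(string);
        }
    }
    catch (YetAnotherException)
    {
        return default(string);
    }
}
```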

So there’s nothing very complicated going on here. In fact, I bet you can see how similar the two methods really are – the structure is identical! All we’ve done is replace the familiar try-catch construct with our own Call-construct.

As an aside, we should note that the composed try-catch approach has slightly different semantics than the sequential, coupled try-catch approach. The difference in semantics is due to the decoupling provided by the composed try-catch approach – each catch-block is completely independent. Therefore, there is nothing stopping us from having multiple catch-handlers for the same type of exception should we so desire.

Now, to work on the readability a bit. What we really would like is some way to attach catch-handlers for various exception types to our function call. So assuming that we wrap up our original function call in a closure using a delegate of type Func<TR>, we would like to be able to attach a catch-handler for some exception type TE, and end up with a new closure that still has the type Func<TR>. Then we would have encapsulated the exception handling completely. Our unambitious Call-method from above is almost what we need, but not quite. Instead, let’s define an extension method on the type that we would like to extend! Func<TR>, that is:
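
A sketch of the extension method (inside some static class):

```csharp
public static Func<TR> Catch<TR, TE>(this Func<TR> f, Func<TE, TR> h)
    where TE : Exception
{
    return () =>
    {
        try
        {
            return f();
        }
        catch (TE ex)
        {
            return h(ex);
        }
    };
}
```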

So the trick is to return a new closure that encapsulates calling the original closure and the exception handling. Then we can write code like this:
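
For instance (note the invocation at the end):

```csharp
public string SafeMethod1()
{
    Func<string> fn = () => Method1();
    return fn
        .Catch((SomeException ex) => default(string))
        .Catch((SomeOtherException ex) => default(string))
        .Catch((YetAnotherException ex) => default(string))();
}
```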

Now the neat thing is that you can very easily separate out the catch-handler-attachment from the rest of the code:
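
Something like this (the Protect name is a guess):

```csharp
private static Func<string> Protect(Func<string> fn)
{
    return fn
        .Catch((SomeException ex) => default(string))
        .Catch((SomeOtherException ex) => default(string))
        .Catch((YetAnotherException ex) => default(string));
}

public string SafeMethod1()
{
    return Protect(() => Method1())();
}
```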

So we have essentially created a fluent interface for attaching catch-handlers to a method call. The cool thing is that it is trivial to attach additional exception handlers as needed – and since we do so programmatically, we can even have logic to control the attachment of handlers. Say we discovered that we needed to catch WerewolfExceptions when the moon is full? No problem:
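
A sketch, where Moon.IsFull is a hypothetical stand-in for whatever moon-phase logic you have at hand:

```csharp
private static Func<string> Protect(Func<string> fn)
{
    var f = fn
        .Catch((SomeException ex) => default(string))
        .Catch((SomeOtherException ex) => default(string))
        .Catch((YetAnotherException ex) => default(string));
    return Moon.IsFull // hypothetical moon-phase check
        ? f.Catch((WerewolfException ex) => default(string))
        : f;
}
```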

In my eyes, this is pretty cool. You might be running away screaming, thinking I’m crazy and that with this approach, you’ll never know which exceptions you’re actually catching anymore. You could be right. Opinions differ.

But that’s OK. All I’m doing is providing an alternative approach to the handling of multiple exceptions – one that I think offers increased power and flexibility. I’m not saying you should take advantage of it. With greater power comes greater responsibility and all that.

And besides, we still haven’t talked about Linq. An alternative (and attractive) solution to our current fluent interface is to attach a sequence of catch-handlers at once. Something like this:
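
The sort of API we would like; but as explained below, there is no good type for the handler sequence, so this does not compile as intended:

```csharp
Func<string> fn = () => FetchStringSomewhere();
var safe = fn.CatchAll(
    (NullReferenceException ex) => default(string),
    (InvalidOperationException ex) => default(string),
    (FormatException ex) => default(string));
```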

However, it’s surprisingly difficult to provide a suitable type for that sequence of catch-handlers – in fact, the C# compiler fails to do so! The problem is that delegates are contravariant in their parameters, which means that a delegate D1 is considered a subtype of delegate D2 if the parameters of D1 are supertypes of the parameters of D2. That’s all a bit abstract, so perhaps an example will help:
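
A small illustration (D1 is Action<object>, D2 is Action<string>):

```csharp
Action<object> d1 = o => Console.WriteLine(o);
Action<string> d2 = s => Console.WriteLine(s.ToUpper());

Action<string> ok = d1;   // compiles: Action<object> is a subtype of Action<string>
// Action<object> bad = d2; // does not compile
```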

To make sense of the abstract description above, assume that D1 is Action<object> and D2 is Action<string>. Since the D1 parameter (object) is a supertype of the D2 parameter (string), it follows that D1 is a subtype of D2 – and not the other way around, as we might have guessed. This is why the C# compiler won’t let us assign a D2 instance to a D1 reference.

The implication is that the C# compiler will fail to find a type that will reconcile the catch handlers above. In particular, due to the contravariance of delegate parameters, we cannot type the sequence as Func<Exception, TR>, since neither Func<NullReferenceException, TR>, nor Func<InvalidOperationException, TR>, nor Func<FormatException, TR> are assignable to Func<Exception, TR>. It would go the other way around: we could assign a Func<Exception, TR> to all three of the other types, but which one should the compiler pick? If it (arbitrarily) picked Func<NullReferenceException, TR>, clearly it wouldn’t work for the two other delegates – and all other choices have the same problem.

So we’re stuck. Sort of. The only solution we have is to hide the exception type somehow, so that we don’t have to include the exception type in the type of the sequence. Now how do we do that? Well, in some sense, we’ve already seen an example of how to do that: we hide the exception handling (and the type) inside a closure. So all we need is some way to convert an exception handler to a simple transformation function that doesn’t care about the type of the exception itself. Like this:
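
A sketch (the name Encase is a guess; it reuses the Catch extension method from above):

```csharp
public static Func<Func<TR>, Func<TR>> Encase<TR, TE>(this Func<TE, TR> h)
    where TE : Exception
{
    return f => f.Catch(h);
}
```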

So what is this thing? It’s a method that encapsulates the catch-handler inside a closure. This closure will take as input a closure of type Func<TR> and produce as output another closure of type Func<TR>. In the process, we have hidden the type TE, so that the C# compiler doesn’t have to worry about it anymore: all we have is a thing that will transform a Func<TR> to another Func<TR>.

So now we can sort of accomplish what we wanted, even though it’s less than perfect.
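
Less than perfect because we must spell out the delegate types explicitly (a sketch):

```csharp
var handlers = new List<Func<Func<string>, Func<string>>>
{
    ((Func<NullReferenceException, string>)(ex => default(string))).Encase(),
    ((Func<InvalidOperationException, string>)(ex => default(string))).Encase(),
    ((Func<FormatException, string>)(ex => default(string))).Encase()
};
```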

But now we can have some fun using Linq’s Aggregate method to compose our exception handlers. So we might write code like this:
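
For instance:

```csharp
Func<string> thing = () => FetchStringSomewhere();
Func<string> safe = handlers.Aggregate(thing, (acc, nxt) => nxt(acc));
Console.WriteLine(safe());
```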

The cool part is obviously the Aggregate call, where acc is the “accumulated” composed closure, nxt is the next encapsulated exception handler and thing is the thing we’re trying to protect with our exception handlers – so in other words, the closure that contains the call to FetchStringSomewhere.

And of course we can now implement CatchAll if we want to:
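
A sketch:

```csharp
public static Func<TR> CatchAll<TR>(this Func<TR> f,
    params Func<Func<TR>, Func<TR>>[] handlers)
{
    return handlers.Aggregate(f, (acc, nxt) => nxt(acc));
}
```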

Now please, if you are Eric Lippert and can come up with code that proves that I’m wrong with respect to the typing of sequences of exception handler delegates – please let me know! I would very much like to be corrected if that is the case.


Another wild tail chase

It appears I’ve been waiting in vain! It’s been more than a month since my last blog post, and still no pull requests for TailCop! In particular, no pull requests that implement rewriting of recursive calls to loops for instance methods. I don’t know why.

I guess it’s up to me, then.

To recap, TailCop is a simple utility I wrote to rewrite tail-recursive static methods to use loops instead (which prevents stack overflow in cases where the recursion goes very deep). The reason we shied away from instance methods last time is dynamic dispatch, which complicates matters a bit. We’ll tackle that in this blog post. To do so, however, we need to impose a couple of constraints.

First, we need to make sure that the instance method is non-virtual, that is, that it cannot be overridden in a subclass. Why? Well, let’s suppose you let Add be virtual, so that people may override it. Sounds reasonable? If it isn’t overridden, then it will behave just the same whether or not it’s rewritten. If it is overridden, then it shouldn’t matter whether the method being overridden is the recursive or the rewritten version, right? Well, you’d think, but unfortunately that’s not the case.

Say you decided to make Add virtual and rewrote it using TailCop. A few months pass by. Along comes your enthusiastic, dim-witted co-worker, ready to redefine the semantics of Add. He’s been reading up on object-orientation and is highly motivated to put all his hard-won knowledge to work. Unfortunately, he didn’t quite get to the Liskov thing, and so he ends up with this:
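
A reconstruction of the scenario; this particular override reproduces the numbers quoted below (90 recursively, 61 after rewriting), but the original code may well have differed:

```csharp
public class Adder
{
    public virtual int Add(int x, int y)
    {
        return x == 0 ? y : Add(x - 1, y + 1);
    }
}

public class BlackAdder : Adder
{
    public override int Add(int x, int y)
    {
        // "improves" on Add by adding an extra 1 whenever x is positive
        return base.Add(x, y) + (x > 0 ? 1 : 0);
    }
}
```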

So while he overrides the Add method in a subclass, he doesn’t replace it wholesale – he still invokes the original Add method as well. But then we have a problem. Due to dynamic dispatch, the recursive Add call in Adder will invoke BlackAdder.Add which will then invoke Adder.Add and so forth. Basically we’re taking the elevator up and down in the inheritance hierarchy. If we rewrite Adder.Add to use a loop, we will never be allowed to take the elevator back down to BlackAdder.Add. Obviously, then, the rewrite is not safe. Running BlackAdder.Add(30, 30) yields 90 with the original recursive version of Adder.Add and 61 with the rewritten version. Long story short: we will not rewrite virtual methods.

Our second constraint is that, obviously, the recursive call has to be made on the same object instance. If we call the same method on a different instance, there’s no guarantee that we’ll get the same result. For instance, if the calculation relies on object state in any way, we’re toast. So we need to invoke the method on this, not that. So how do we ensure that a recursive call is always made on the same instance – that is, on this? Well, obviously we need to be in a situation where the evaluation stack contains a reference to this in the right place when we’re making the recursive call. In IL, the this parameter to instance methods is always passed explicitly, unlike in C#. So a C# instance method with n parameters is represented as a method with n+1 parameters on the IL level. The additional parameter in IL is for the this reference, and is passed as the first parameter to the instance method. (This is similar to Python, where the self parameter is always passed explicitly to instance methods.) So anyways, if we take the evaluation stack at the point of a call to an instance method and pop off n values (corresponding to the n parameters in the C# method), we should find a this there. If we find something else, we won’t rewrite.

While the first constraint is trivial to check for, the second one is a bit more involved. What we have at hand is a data-flow problem, which is a big thing in program analysis. In our case, we need to identify places in the code where this references are pushed onto the stack, and emulate how the stack changes when different opcodes are executed. To model the flow of data in a method (in particular: flow of this references), we first need to construct a control flow graph (CFG for short). A CFG shows (statically) the different possible execution paths through the method. It consists of nodes that represent blocks of instructions and edges that represent paths from one such block to another. A method without branching instructions has a trivial CFG, consisting of a single node representing a block with all the instructions in the method. Once we have branching instructions, however, things become a little more interesting. For instance, consider the code below:
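
A hypothetical method with branching, of the kind that produces a non-trivial CFG:

```csharp
public static int Compute(int x)
{
    int result;
    if (x > 0)
    {
        result = x * 2;
    }
    else
    {
        result = -x;
    }
    return result;
}
```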

The CFG for (the IL representation of) that code looks like this:

[Figure: the control flow graph for the method above]

As you can see, some nodes now have multiple inbound edges. This matters when we try to describe how data flows through the method. Let’s see if we can sketch it out by looking at what happens inside a single node first. A node represents a block of n instructions. Each instruction can be thought of as a function f : S -> S' that accepts a stack as input and produces a stack as output. We can then produce an aggregated stack transformation function for an entire node in the CFG by composing such functions. Since each node represents a block of n instructions, we have a corresponding sequence of functions f0, f1, ..., fn-1 of stack transformations. We can use this sequence to compose a new function g : S -> S' by applying each function fi in order, as follows: g(s) = fn-1(...(f1(f0(s)))). In superior languages, this is sometimes written g = fn-1 o ... o f1 o f0, where o is the composition operator.

Each node in the CFG is associated with such a transformation function g. Now the edges come into play: since it is possible to arrive at some node n in the CFG by following different paths, we may end up with more than a single stack as potential input for n's g function – and hence more than a single stack as potential output. In general, therefore, we associate with each node a set I of distinct input stacks and a set O of distinct output stacks. Obviously, if there is an edge from node n to node m in CFG, then all stacks in n's output set will be elements in m's input set. To determine the sets I and O for each node in the CFG, we traverse the edges in the CFG until the various Is and Os stabilize, that is, until we no longer produce additional distinct stacks in any of the sets.

This gives us the following pseudocode for processing a node in the CFG, given a set S of stacks as input:
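
Something along these lines (a reconstruction):

```
process(node, S):
    I := I ∪ S
    O' := { g(s) | s ∈ I }
    if O' contains stacks not already in O:
        O := O ∪ O'
        for each successor m of node:
            process(m, O)
```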

Initially, I and O for all nodes will be empty sets. We start processing at the entry node, with S containing just the empty stack. When we’re done, each node will have its appropriate sets I and O.

So now we have the theory pretty much in place. We still need a way to determine the potential stack state(s) at the place in the code where it matters, though: at the call instruction for the recursive method call. It’s very easy at this point – we already have all the machinery we need. Assuming that a recursive call happens as the k'th instruction in some node, all we have to do is compose the function h(s) = fk-1(...(f1(f0(s)))), alternatively h = fk-1 o ... o f1 o f0. Mapping h onto the stacks in the node’s I set, we get a set C of possible stacks at the call site. Now we pop any “regular” argument values for the method call off the stacks in C to produce a new set C'. Finally we verify that for all elements in C', we have a this reference at the top of the stack.

Now we should be in good shape to tackle the practicalities of our implementation. One thing we obviously need is a data type to represent our evaluation stack – after all, our description above is littered with stack instances. The stack can be really simple, in that it only needs to distinguish between two kinds of values: this and anything else. So it’s easy, we’ll just use the plain ol’ Stack<bool>, right? Unfortunately we can’t, since Stack<bool> is mutable (in that push and pop mutate the stack they operate on). That’s definitely not going to work. When producing the stack instances in O, we don’t want the g function to mutate the actual stack instances in I in the process. We might return later on with stack instances we’ll want to compare to the instances in I, so we need to preserve those as-is. Hence we need an immutable data structure. So we should use ImmutableStack<bool> from the Immutable Collections, right? I wish! Unfortunately, ImmutableStack<bool>.Equals has another problem (which Stack<bool> also has) – it implements reference equality, whereas we really need value equality (so that two distinct stack instances containing the same values are considered equal). So I ended up throwing together my own EvalStack class instead, highly unoptimized and probably buggy, but still serving our needs.
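
The real EvalStack in TailCop surely differs in detail; here is a rough sketch of the shape of such a class:

```csharp
using System;
using System.Linq;

public class EvalStack
{
    private readonly bool[] _values; // true means "this", false means anything else

    public EvalStack() : this(new bool[0]) { }

    private EvalStack(bool[] values)
    {
        _values = values;
    }

    public bool Top
    {
        get { return _values[_values.Length - 1]; }
    }

    public EvalStack Push(bool isThis)
    {
        var vs = new bool[_values.Length + 1];
        _values.CopyTo(vs, 0);
        vs[_values.Length] = isThis;
        return new EvalStack(vs);
    }

    public EvalStack Pop()
    {
        var vs = new bool[_values.Length - 1];
        Array.Copy(_values, vs, vs.Length);
        return new EvalStack(vs);
    }

    public override bool Equals(object obj)
    {
        var other = obj as EvalStack;
        return other != null && _values.SequenceEqual(other._values);
    }

    public override int GetHashCode()
    {
        return _values.Aggregate(17, (acc, b) => acc * 31 + (b ? 1 : 0));
    }

    public override string ToString()
    {
        return "[" + string.Join("|", _values.Select(b => b ? "@" : "-")) + "]";
    }
}
```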

I’m particularly happy about the way the code for the ToString method ended up.

Now that we have a data structure for the evaluation stack, we can proceed to look at how to implement the functions f : S -> S' associated with each instruction in the method body. At first glance, this might seem like a gargantuan task – in fact, panic might grip your heart – as there are rather a lot of different opcodes in IL. I haven’t bothered to count them, but it’s more than 200. Luckily, we don’t have to implement unique functions for all of them. Instead, we’ll treat groups of them in bulk. At a high level, we’ll distinguish between three groups of instructions: generators, propagators and consumers.

The generators are instructions that conjure up this references out of thin air and push them onto the stack. The prime example is ldarg.0 which loads the first argument to the method and pushes it. For instance methods, the first argument is always this. In addition to ldarg.0, there are a few other instructions that in principle could perform the same task (such as ldarg n).

The propagators are instructions that can derive or pass on a this reference from an existing this reference. The dup instruction is such an instruction. It duplicates whatever is currently at the top of the stack. If that happens to be a this reference, the result will be this references at the two topmost locations in the stack.

The vast majority of the instructions, however, are mere consumers in our scenario. They might vary in their exact effect on the stack (how many values they pop and push), but they’ll never produce a this reference. Hence we can treat them generically, as long as we know the number of pops and pushes for each – we’ll just pop values regardless of whether or not they are this references, and we’ll push zero or more non-this values onto the stack.

At this point, it’s worth considering the soundness of our approach. In particular, what happens if I fail to identify a generator or a propagator? Will the resulting code still be correct? Yes! Why? Because we’re always erring on the conservative side. As long as we don’t falsely produce a this reference that shouldn’t be there, we’re good. Failing to produce a this reference that should be there is not much of a problem, since the worst thing that can happen is that we miss a tail call that could have been rewritten. For instance, I’m not even bothering to try to track this references that are written to fields with stfld (for whatever reason!) and then read back and pushed onto the stack with ldfld.

Does this mean TailCop is safe to use and you should apply it to your business critical applications to benefit from the immense speed benefits and reduced risk for stack overflows that stems from rewriting recursive calls to loops? Absolutely not! Are you crazy? Expect to find bugs, oversights and gaffes all over the place. In fact, TailCop is very likely to crash when run on code examples that deviate much at all from the simplistic examples found in this post. All I’m saying is that the principles should be sound.

Mono.Cecil does try to make our implementation task as simple as possible, though. The OpCode class makes it almost trivial to write f-functions for most consumers – which is terrific news, since so many instructions end up in that category. Each OpCode in Mono.Cecil knows how many values it pops and pushes, and so we can easily compose each f from the primitive functions pop and push. For instance, assume we want to create the function fadd for the add instruction. Since Mono.Cecil is kind enough to tell us that add pops two values and pushes one value, we’ll use that information to compose the function fadd(s) = push(*non-this*, pop(pop(s))).

Here’s how we compose such f-functions for consumer instructions in TailCop:
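
Something along these lines; GetPopCount and GetPushCount are hypothetical helpers that translate the OpCode's StackBehaviourPop and StackBehaviourPush values into numbers:

```csharp
private static Func<EvalStack, EvalStack> CreateConsumerF(Instruction insn)
{
    Func<EvalStack, EvalStack> f = s => s;
    int pops = GetPopCount(insn);    // hypothetical helper
    int pushes = GetPushCount(insn); // hypothetical helper
    for (int i = 0; i < pops; i++)
    {
        var fresh = f; // fresh variable to avoid the closure modification problem
        f = s => fresh(s).Pop();
    }
    for (int i = 0; i < pushes; i++)
    {
        var fresh = f; // ditto
        f = s => fresh(s).Push(false); // a consumer never pushes a this reference
    }
    return f;
}
```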

Notice the two fresh variables, which are there to avoid problems related to closure modification. Eric Lippert explains the problem here and here. TL;DR is: we need a fresh variable to capture each intermediate result closure.

We’ll call the CreateConsumerF method from the general CreateF method which also handles generators and propagators. The simplest possible version looks like this:
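
A sketch:

```csharp
private static Func<EvalStack, EvalStack> CreateF(Instruction insn)
{
    if (insn.OpCode == OpCodes.Ldarg_0)
    {
        return s => s.Push(true); // the generator: pushes this
    }
    if (insn.OpCode == OpCodes.Dup)
    {
        return s => s.Push(s.Top); // the propagator: duplicates the top value
    }
    return CreateConsumerF(insn); // everything else is treated as a consumer
}
```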

You’ll note that I’ve only included a single generator and a single propagator! I might add more later on. The minimal CreateF version is sufficient to handle our naïve Add method though.

Now that we have a factory that produces f-functions for us, we’re all set to compose g-functions for each node in the CFG.
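
Composing a node's g is just a fold over its instructions' f-functions; a sketch:

```csharp
private static Func<EvalStack, EvalStack> CreateG(IEnumerable<Instruction> block)
{
    return block
        .Select(CreateF)
        .Aggregate((Func<EvalStack, EvalStack>)(s => s), (g, f) => s => f(g(s)));
}
```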

Once we have the g-function for each node, we can proceed to turn the pseudocode for processing nodes into actual C# code. In fact the C# code looks remarkably similar to the pseudocode.
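
A sketch, assuming a hypothetical Node type holding the I and O sets (as ISet<EvalStack>), the node's g-function G and its successor nodes:

```csharp
private static void Process(Node node, IEnumerable<EvalStack> stacks)
{
    node.I.UnionWith(stacks);
    var produced = new HashSet<EvalStack>(node.I.Select(node.G));
    if (!produced.IsSubsetOf(node.O))
    {
        node.O.UnionWith(produced);
        foreach (var m in node.Successors)
        {
            Process(m, node.O);
        }
    }
}
```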

We process the nodes in the CFG, starting at the root node, until the I-sets of input stacks and the O-sets of output stacks stabilize. At that point, we can determine the stack state (with respect to this references) for any given instruction in the method – including for any recursive method call. We determine whether or not it is safe to rewrite a recursive call to a loop like this:
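
A sketch of the safety check (FindNode and CreateH are hypothetical helpers: the first locates the CFG node containing the call, the second composes f0 through fk-1 up to the call):

```csharp
private static bool IsSafeToRewrite(Instruction call, MethodDefinition method)
{
    var node = FindNode(call);   // hypothetical helper
    var h = CreateH(node, call); // hypothetical helper
    int argCount = method.Parameters.Count;
    return node.I
        .Select(h)
        .Select(s =>
        {
            for (int i = 0; i < argCount; i++)
            {
                s = s.Pop(); // pop the "regular" argument values
            }
            return s;
        })
        .All(s => s.Top); // this must be on top of every possible stack
}
```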

We find the node that the call instruction belongs to, find the possible stack states at the call site, pop off any values intended to be passed as arguments to the method, and verify that we find a this reference at the top of each stack. Simple! To find the possible stack states, we need to compose the h-function for the call, but that’s easy at this point.

And with that, we’re done. Does it work? It works on my machine. You’ll have to download TailCop and try for yourself.


Chasing your tail with bytecode manipulation

Last week I was at the TDC conference in Trondheim to do a talk entitled “Bytecode for beginners”. In one of my demos, I showed how you might do a limited form of tail call elimination using bytecode manipulation. To appreciate what (recursive) tail calls are and why you might want to eliminate them, consider the following code snippet:
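
It presumably looked something like this:

```csharp
public static int Add(int x, int y)
{
    return x == 0 ? y : Add(x - 1, y + 1);
}
```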

As you can see, it’s a simple recursive algorithm to add two (non-negative) integers together. Yes, I am aware that there is a bytecode operation available for adding integers, but let’s forget about such tedious practicalities for a while. It’s just there to serve as a minimal example of a recursive algorithm. Bear with me.

The algorithm exploits two simple facts:

1. Addition is trivial to do if one of the numbers is zero.
2. We can work our way to the trivial case incrementally.

So basically we just decrement x and increment y until we run out of x, and then all we have left is y. Pretty simple.

This algorithm works really well for lots of integers, but the physical world of the computer puts a limit on how big x can be. The problem is this: each time we call Add, the .NET runtime will allocate a bit of memory known as a stack frame for the execution of the method. To illustrate, consider the addition of two small numbers, 6 + 6. If we imagine the stack frames -uh- stacked on top of each other, it might look something like this:

[Figure: the call stack for Add(6, 6), with stack frames from Add(6, 6) down to Add(0, 12)]

So we allocate a total of 7 stack frames to perform the calculation. The .NET runtime will handle that just fine, but 6 is a pretty small number. In general we allocate x + 1 stack frames, and at some point that becomes a problem. The .NET runtime can only accommodate so many stack frames before throwing in the towel (where the towel takes on the physical form of a StackOverflowException).

It’s worth noting, though, that all we’re really doing in each of the stack frames leading up to Add(0, 12) is wait around for the result of the next invocation of Add to finish, and when we get that result, that’s immediately what is returned as result from the current stack frame.

This is what is known as a tail recursive call. In general, a tail call is any call in tail position, that is, any call that happens as the last operation of a method. It may be a call to the same method (as in our example) or it may be a call to some other method. In either case, we’re making a method call at a point in time where we don’t have much need for the old stack frame anymore.

It should come as no surprise, therefore, that clever people have figured out that in principle, we don’t need a brand new stack frame for each tail call. Instead, we can reuse the old one, slightly modified, and simply jump to the appropriate method. This is known as tail call optimization or tail call elimination. You can find all the details in a classic paper by the eminent Guy L. Steele Jr. The paper has the impressive title DEBUNKING THE “EXPENSIVE PROCEDURE CALL” MYTH or PROCEDURE CALL IMPLEMENTATIONS CONSIDERED HARMFUL or LAMBDA: THE ULTIMATE GOTO, but is affectionately known as simply Lambda: The Ultimate GOTO (probably because overly long and complex titles are considered harmful).

In this blog post, we’ll implement a poor man’s tail call elimination by transforming recursive tail calls into loops. Instead of actually making a recursive method call, we’ll just jump to the start of the method – with the arguments to the method set to the appropriate values. That’s actually remarkably easy to accomplish using bytecode rewriting with the ever-amazing Mono.Cecil library. Let’s see how we can do it.

First, we’ll take a look at the original bytecode, the one that does the recursive tail call.
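
Here is a reconstruction, using labels instead of byte offsets for readability; in the original listing the call instruction sat at offset IL_0012:

```
.method public hidebysig static int32 Add(int32 x, int32 y) cil managed
{
  Start:
    ldarg.0                                // push x
    brtrue.s NotZero                       // if x != 0, jump below
    ldarg.1                                // push y
    ret                                    // return y
  NotZero:
    ldarg.0                                // push x
    ldc.i4.1                               // push 1
    sub                                    // x - 1
    ldarg.1                                // push y
    ldc.i4.1                               // push 1
    add                                    // y + 1
    call int32 Program::Add(int32, int32)  // the recursive tail call (IL_0012)
    ret                                    // return the result
}
```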

So the crucial line is at IL_0012, that’s where the recursive tail call happens. We’ll eliminate the call instruction and replace it with essentially a goto. In terms of IL we’ll use a br.s opcode (where “br” means branch), with the first instruction (IL_0000) as target. Prior to jumping to IL_0000, we need to update the argument values for the method. The way method calls work in IL is that the argument values have been pushed onto the execution stack prior to the call, with the first argument deepest down in the stack, and the last argument at the top. Therefore we already have the necessary values on the execution stack, it is merely a matter of writing them to the right argument locations. All we need to do is starg 1 and starg 0 in turn, to update the value of y and x respectively.
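
After the rewrite, the tail of the method looks something like this (same label-based sketch):

```
  NotZero:
    ldarg.0                // push x
    ldc.i4.1               // push 1
    sub                    // x - 1
    ldarg.1                // push y
    ldc.i4.1               // push 1
    add                    // y + 1
    starg.s 1              // pop y + 1 into y
    starg.s 0              // pop x - 1 into x
    br.s Start             // jump back to the first instruction
```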

If we reverse engineer this code into C# using a tool like ILSpy, we’ll see that we’ve indeed produced a loop.
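
Roughly like this (a reconstruction, including the synthetic variable discussed below):

```csharp
public static int Add(int x, int y)
{
    while (x != 0)
    {
        int arg_0F_0 = x - 1;
        y++;
        x = arg_0F_0;
    }
    return y;
}
```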

You may wonder where the arg_0F_0 variable comes from; I do too. ILSpy made it up for whatever reason. There’s nothing in the bytecode that mandates a local variable, but perhaps it makes for simpler reverse engineering.

Apart from that, we note that the elegant recursive algorithm is gone, replaced by a completely pedestrian and mundane one that uses mutable state. The benefit is that we no longer run the risk of running out of stack frames – the reverse engineered code never allocates more than a single stack frame. So that’s nice. Now if we could do this thing automatically, we could have the best of both worlds: we could write our algorithms in the recursive style, yet have them executed as loops. That’s where TailCop comes in.

TailCop is a simple command line utility I wrote that rewrites some tail calls into loops, as in the example we’ve just seen. Why some and not all? Well, first of all, rewriting to loops doesn’t help much for mutually recursive methods, say. So we’re restricted to strictly self-recursive tail calls. Furthermore, we have to be careful with dynamic dispatch of method calls. To keep TailCop as simple as possible, I evade that problem altogether and don’t target instance methods at all. Instead, TailCop will only rewrite tail calls for static methods. (Obviously, you should feel free, even encouraged, to extend TailCop to handle benign cases of self-recursive instance methods, i.e. cases where the method is always invoked on the same object. Update: I’ve done it myself.)

The first thing we need to do is find all the recursive tail calls.
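
A sketch of the detection, using Mono.Cecil:

```csharp
private static IEnumerable<Instruction> FindTailCalls(MethodDefinition method)
{
    return method.Body.Instructions.Where(insn =>
        insn.OpCode == OpCodes.Call
        && ((MethodReference)insn.Operand).FullName == method.FullName
        && insn.Next != null
        && insn.Next.OpCode == OpCodes.Ret);
}
```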

So as you can see, there’s nothing mystical going on here. We’re simply selecting call instructions from method bodies where the invoked method is the same as the method we’re in (so it’s recursive) and the following instruction is a ret instruction.

The second (and final) thing is to do the rewriting described above.
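
A sketch of the rewrite, again using Mono.Cecil:

```csharp
private static void Rewrite(MethodDefinition method, Instruction call)
{
    var il = method.Body.GetILProcessor();
    var start = method.Body.Instructions.First();
    // 1. update the argument values, last argument first
    for (int i = method.Parameters.Count - 1; i >= 0; i--)
    {
        il.InsertBefore(call, il.Create(OpCodes.Starg, method.Parameters[i]));
    }
    // 2. jump back to the start of the method
    il.InsertBefore(call, il.Create(OpCodes.Br_S, start));
    // 3. remove the call and the ret that follows it
    var ret = call.Next;
    il.Remove(call);
    il.Remove(ret);
}
```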

As you can see, we consistently inject new instructions before the recursive call. There are three things to do:

1. Loop to update the argument values using starg instructions.
2. Insert the br.s instruction that will jump to the start of the method.
3. Remove the recursive call instruction as well as the ret that follows immediately after it.

That’s all there is to it. If you run TailCop on an assembly that contains the tail recursive Add method, it will produce a new assembly where the Add method contains a loop instead. Magic!

To convince ourselves (or at least make it plausible) that TailCop works in general, not just for the Add example, let’s consider another example. It looks like this:
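
A reconstruction: a tail-recursive sum with an accumulator (the result variable mentioned below):

```csharp
public static int Sum(IEnumerable<int> numbers, int result)
{
    if (!numbers.Any())
    {
        return result;
    }
    return Sum(numbers.Skip(1), result + numbers.First());
}
```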

So once again we have a tail recursive algorithm, this time to compute the sum of numbers in a list. It would be sort of nice and elegant if it were implemented in a functional language, but we’ll make do. The idea is to exploit two simple facts about sums of lists of integers:

1. The sum of the empty list is zero.
2. The sum of the non-empty list is the value of the first element plus the sum of the rest of the list.

The only complicating detail is that we use an accumulator (the result variable) in order to make the implementation tail-recursive. That is, we pass the partial result of summing along until we run out of numbers to sum, and then the result is complete. But of course, this algorithm is now just as susceptible to stack overflows as the recursive Add method was. And so we run TailCop on it to produce this instead:
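
Roughly this (a reconstruction; ILSpy's invented variable name will differ):

```csharp
public static int Sum(IEnumerable<int> numbers, int result)
{
    while (numbers.Any())
    {
        IEnumerable<int> arg_1B_0 = numbers.Skip(1);
        result += numbers.First();
        numbers = arg_1B_0;
    }
    return result;
}
```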

And we’re golden. You’ll note that ILSpy is just about as skilled at naming things as that guy you inherited code from last month, but there you go. You’re not supposed to look at reverse engineered code, you know.


Shrink-wrapped Mkay

If you’ve been following this blog, you’ll know that there is no point in writing your own custom validation attributes for ASP.NET MVC data validation any more. Instead, you should be using Mkay, which allows you to specify arbitrary validation rules for your models in a LISP-like syntax. And now you can actually do it too, since I’ve packaged everything nicely in a NuGet package.

When you install the NuGet package in your ASP.NET MVC application, you’ll find that a few things are added. First, there’s a reference to Mkay.dll, which contains the Mkay attribute as well as the code that is executed on the server. Second, in your Scripts folder, you’ll find three JavaScript files: mkay-validation.js (which contains the runtime for the client-side validation in mkay), mkay-parser.js (which is there to support the implementation of eval) and mkay-jquery.js (which hooks up the mkay runtime with unobtrusive jQuery validation). Third, there’s a code file in the App_Start folder called MkayBoot.cs. You may recall that the Mkay attribute must be associated with its own data validator class, in order to obtain the name of the property being validated. This happens inside the Kick method in the MkayBoot class. That method is invoked by means of WebActivator voodoo some time right after Application_Start in global.asax has been invoked. That way, you don’t have to bother with that yourself. For convenience, the Kick method also creates a so-called bundle of the Mkay JavaScript files.

Of course, you must remember to reference the Mkay JavaScript bundle in your view somehow, as well as the jQuery validation bundle. You might want to add them to the layout used by your view, for instance. Here’s an example:
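
A sketch of what that might look like in a Razor layout; the bundle names are assumptions, so adjust them to whatever MkayBoot actually registers:

```
@* bundle names assumed for the sake of the example *@
@Scripts.Render("~/bundles/jqueryval")
@Scripts.Render("~/bundles/mkay")
```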

And then you can start using Mkay for your own validation needs. Wee! World domination!


Self-referential validation in Mkay

…so I implemented eval for Mkay. That sentence doesn’t have a first half, because I couldn’t think of any good reasons for doing so. I happen to think that’s a perfectly valid reason in and by itself, but I fear that’s a minority stance. But it doesn’t really matter. The second half of the sentence is true in any case. I implemented eval for Mkay.

It might be unclear to you exactly what I mean by that, though. What I mean is that Mkay now has a function (called eval) that you can call inside an Mkay expression. That function will take another Mkay expression as a string parameter and produce a boolean result when called. That result will then be used within the original Mkay expression. Still opaque? A concrete example should make it entirely transparent.
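
A sketch of such a model (property names are guesses; the rule string matches the description below):

```csharp
public class EvalViewModel
{
    public string Rule { get; set; }

    [Mkay("(eval Rule)")]
    public string Value { get; set; }
}
```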

So here we have a model that uses eval inside an Mkay expression. How does it work in practice? Have a look:

So what happens in the video is that the rule “(eval Rule)” that annotates the Value property says that you should take the content of the Rule property and interpret that as the rule that the Value property must adhere to. It’s sort of like SQL injection, only for Mkay. Isn’t that nice?

The string passed to eval could of course be gleaned from several sources, not just a single property. For instance, the rule (eval (+ "and " A " " B)) creates and evaluates a validation rule by combining the string "and " with the value of property A, a space, and the value of property B.

It’s even more amusing if you go all self-referential and Douglas Hofstadter-like, and have the value and the rule be one and the same thing. To accomplish that, all you have to do is annotate your property with “(eval .)”.
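
In code, that looks something like this (a sketch):

```csharp
public class SelfReferentialViewModel
{
    [Mkay("(eval .)")]
    public string RuleAndValue { get; set; }
}
```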

And then we can do stuff like this:

Can you do anything useful with this? Probably not. But you’ve got to admit it’s pretty cute.