Dragging a dead priest: programs and programmers in time

A fundamental challenge we face as programmers is that the world is alive, but the programs we write, alas, are not. When we deploy our dead programs to operate in the living world, we face an immediate maintenance problem. Or, if you like, a matter of keeping up appearances.

In the movie Night on Earth, Roberto Benigni plays an exuberant and eccentric taxi driver working the night shift in Rome. You might be able to see the movie clip on YouTube. If you can, you should.

As night turns into early morning, the taxi driver picks up a priest. During the ride, the talkative driver starts confessing his many sins of the flesh to the priest – including affairs with a pumpkin, a sheep named Lola, and his brother’s wife. The priest listens in increasing disbelief and discomfort. Apparently he has a weak heart. By the time the taxi driver has finished his confessions, the poor priest is long dead.

What should the taxi driver do? He decides to try to make it look as if the priest has not in fact died, but rather just fallen asleep on a bench. Unfortunately, the priest’s body is unwieldy and heavy, which makes the task cumbersome. (As the taxi driver puts it, “he is only a priest, but he weighs enough to be a cardinal”.) He drags the priest out of his taxi, and with much effort manages to place him on a bench. But dead people are not good at sitting upright. The priest repeatedly slides down. After several tries, however, he succeeds in placing the priest in a reasonable sitting position. He immediately notices another problem: the priest’s eyes are wide open. It’s not a natural look. He tries to close his eyelids, but they spring back open. The solution? The sunglasses he has been wearing while driving a taxi at night. He puts the sunglasses on the priest, and drives away. As he does so, the priest slumps to the side.

I suggest that we programmers are Roberto Benigni in this story, and that our programs play the part of the priest. We are constantly dragging our programs around, trying to make them look alive. In our case, we have the additional problem that the programs grow larger as we drag them around – as if the priest were gaining ever more weight.

The world around us constantly changes. Our programs are unable to respond and adapt to those changes by themselves. They are, after all, mere code. They are dead and have no agency. It falls on us, the programmers, to run around modifying the code as appropriate in response to the changes that occur in the world. Unfortunately, since this is a manual process and we are bound by time and economic constraints, our response is bound to be partial and imperfect. We might not immediately be aware of all relevant changes that we need to respond to. Sometimes we don’t know exactly how to respond. We might not consider it worthwhile to respond to the change. The change might be incompatible with previous choices or assumptions we have made. It might take a lot of time to make the necessary changes to the code. When there are many changes, we might lag behind.

The problem, then, is that our programs are dead structures that exist in time. By necessity, they exhibit inertia in the face of change. Effort must be expended to overcome that inertia. Effort does not only cost resources, but also takes time. And time brings with it more changes.

There are further complicating factors as well. For instance, it’s increasingly hard to clearly define what the program is and isn’t. Is it just the code in the git repository? Surely not, because it almost certainly depends on third-party components and external services as well, not to mention a hosting environment, perhaps in the cloud somewhere. In a sense, our code is just a plug-in to this much larger program – which also has structure, is affected by change, exhibits inertia, and needs external help in adapting to that change. The interplay between changes and more or less delayed responses to these changes in various parts of the conglomerate program can be chaotic.

Speaking of help: the program is interwoven with the organization that creates the program, as part of a larger socio-technical system. This system is also hard to delineate. The distinction between system and non-system is an abstraction that we will into being by collaborative illusion-making. And of course the people, too, are in a sense structures (albeit live ones), with their jobs, roles, salaries, status, group memberships, relationships, loyalties and trust. We also exhibit inertia in the face of change.

What changes brush up against and confront all this inertia? All kinds: functional requirements, non-functional requirements, vulnerabilities in third party components, fashions and trends, legislation, macro and micro economic developments, funding, people leaving and joining – anything that changes the relationship between the program (or our part of it, anyway), the organization and the world they both operate in. The world is an engine that creates constant gaps, and we must keep running to stand relatively still. Or rather to maintain our balance, since the world is spinning, never where it was a moment ago.

What happens if we can’t keep up? If the gaps between expectations and reality become too large, and the illusion of the living program breaks down? Is it a second death that awaits when the priest has become too heavy for us to move?


Aspects without Aspects

In the previous blog posts, we saw that we could hide the problematic concrete exception type from the C# compiler by tucking it inside a transformation from a closure of type Func<TR> to another closure of the same type. But of course we can use such transformations for many things besides exception handling. Any old behaviour that we would like to throw on top of the business logic, we can apply in layers using this approach.

This capability is so cool that I took a break from writing this blog post to share my enthusiasm with my wife. She was like, “what are you blogging about?”, and I was like “there’s this really cool thing you can do, where you apply this transformation to some method call, and then you can like, do additional stuff with it, entirely transparently!”, and she was like “like what?”, and I was like “like anything!”, and she was like “like what?”, and I was like “anything you want!”, but she was still like “like what though?” and then I turned more like “uh… uh… like say you had this method that returned a string – you could easily transform that into a method that looked exactly the same, but returned the reversed string instead”, and she was like “…the reversed string? why?” and I was like “or-or-or maybe you could return the uppercase string instead…?”, and she was like “uppercase?” with totally uppercase eyebrows and I was like “nonono! I got it! say you had this method that did a really expensive and slow computation, you could turn that into a method that would keep the result of that computation around, so you didn’t have to do the actual computation all the time”, and she was like “oh, that’s cool” and I was like “phew! I’m never talking to her about blogging again!”.

So that was a close call. But yes, you can totally use this for caching. All we need is a suitable transformation thing.


public static Func<T> Caching<T>(this Func<T> f)
{
    bool cached = false;
    T t = default(T);
    return () => {
        if (cached) return t;
        t = f();
        cached = true;
        return t;
    };
}


Here, we’re taking advantage of the fact that C# has mutable closures – that is, that we can write to cached and t from inside the body of the lambda expression. (Note that this simple implementation is not thread-safe; two threads might race and evaluate f twice.)
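
To see mutable closures in isolation, here is a tiny made-up example (not part of the caching code):

static Func<int> MakeCounter()
{
    int n = 0;         // captured by the lambda below
    return () => ++n;  // each call writes to the captured n
}

Calling the closure returned by MakeCounter repeatedly yields 1, 2, 3, and so on – the captured variable survives between calls, which is exactly what Caching relies on for its cached flag and its t value.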

To verify that it works, we need a suitable example – something that’s really expensive and slow to compute. And as we all know, one of the most computationally intensive things we can do in a code example is to sleep:


Func<string> q = () => {
    Thread.Sleep(2000);
    return "Hard-obtained string";
};
Console.WriteLine(q());
Console.WriteLine(q());
Console.WriteLine(q());
q = q.Caching();
Console.WriteLine(q());
Console.WriteLine(q());
Console.WriteLine(q());


Well, what kind of behaviour should we expect from this code? Obviously, the first three q calls will be slow. But what about the last three? The last three execute the caching closure instead. When we execute the fourth call, cached is false, and so the if test fails, and we proceed to evaluate the original, non-caching q (which is slow), tuck away the result value for later, set the cached flag to true, and return the computed result – the hard-obtained string. But the fifth and sixth calls should be quick, since cached is now true, and we have a cached result value to return to the caller, without ever having to resort to the original q.

That’s the theory. In practice, the first four calls each take about two seconds – the fourth is the one that actually fills the cache – while the fifth and sixth calls return the cached string immediately.

So that seems to work according to plan. What else can we do? We’ve seen exception handling in the previous posts and caching in this one – both examples of so-called “cross-cutting concerns” in our applications. Cross-cutting concerns was hot terminology ten years ago, when the enterprise world discovered the power of the meta-object protocol (without realizing it, of course). It did so in the guise of aspect-oriented programming, which carried with it a whole vocabulary besides the term “cross-cutting concerns” itself, including “advice” (additional behaviour to handle), “join points” (places in your code where the additional behaviour may be applied) and “pointcuts” (a way of specifying declaratively which join points the advice applies to). And indeed, we can use these transformations that we’ve been doing to implement a sort of poor man’s aspects.

Why a poor man’s aspects? What’s cheap about them? Well, we will be applying advice at various join points, but we won’t be supporting pointcuts to select them. Rather, we will be applying advice to the join points manually. Arguably, therefore, it’s not really aspects at all, and yet we get some of the same capabilities. That’s why we’ll call them aspects without aspects. Make sense?

Let’s consider wrapping some closure f in a hypothetical try-finally-block, and see where we might want to add behaviour.


// 1. Before calling f.
try {
    f();
    // 2. After successful call to f.
}
finally {
    // 3. After any call to f.
}

So we’ll create extension methods to add behaviour in those three places. We’ll call them Before, Success and After, respectively.


public static class AspectExtensions {
    public static Func<T> Before<T>(this Func<T> f, Action a) {
        return () => { a(); return f(); };
    }

    public static Func<T> Success<T>(this Func<T> f, Action a) {
        return () => {
            var result = f();
            a();
            return result;
        };
    }

    public static Func<T> Success<T>(this Func<T> f, Action<T> a) {
        return () => {
            var result = f();
            a(result);
            return result;
        };
    }

    public static Func<T> After<T>(this Func<T> f, Action a) {
        return () => {
            try {
                return f();
            } finally {
                a();
            }
        };
    }

    public static Func<T> After<T>(this Func<T> f, Action<T> a) {
        return () => {
            T result = default(T);
            try {
                result = f();
                return result;
            } finally {
                a(result);
            }
        };
    }
}

Note that we have two options for each of the join points that occur after the call to the original f closure. In some cases you might be interested in the value returned by f, in others you might not be.
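
One subtlety is worth pointing out (my observation, not something the code advertises): if f throws, the Action<T> overload of After is invoked with default(T), since result is never assigned. For instance, assuming AspectExtensions is in scope:

Func<string> boom = () => { throw new Exception("Boom"); };
var wrapped = boom.After(r => Console.WriteLine("After saw: " + (r ?? "<null>")));
// Calling wrapped() prints "After saw: <null>" before the exception propagates.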

How does it work in practice? Let’s look at a contrived example.


static void Main(string[] args)
{
    Func<Func<string>, Func<string>> wrap = fn => fn
        .Before(() => Console.WriteLine("I'm happening early on."))
        .Success(r => Console.WriteLine("Successfully obtained: " + r))
        .Before(() => Console.WriteLine("When do I occur???"))
        .After(r => Console.WriteLine("What did I get? " + r));
    var m1 = wrap(() => {
        Console.WriteLine("Executing m1…");
        return "Hello Kiczales!";
    });
    var m2 = wrap(() => {
        Console.WriteLine("Executing m2…");
        throw new Exception("Boom");
    });
    Call("m1", m1);
    Call("m2", m2);
}

static void Call(string name, Func<string> m) {
    Console.WriteLine(name);
    try {
        Console.WriteLine(name + " returned: " + m());
    }
    catch (Exception ex) {
        Console.WriteLine("Exception in {0}: {1}", name, ex.Message);
    }
    Console.WriteLine();
}

So here we have a transformation thing that takes a Func<string> closure and returns another Func<string> closure, with several pieces of advice applied. Can you work out when the different closures will be executed?

We start with some closure fn, but before fn executes, the first Before must execute (that’s why we call it Before!). Assuming both of these execute successfully (without throwing an exception), the Success will execute. But before all these things, the second Before must execute! And finally, regardless of how the execution turns out with respect to exceptions, the After should execute.

In the case of m1, no exception occurs, so we should see the message “Successfully obtained: Hello Kiczales!” in between “Executing m1…” and “What did I get? Hello Kiczales!”. In the case of m2, on the other hand, we do get an exception, so the Success closure is never executed.

Running the program confirms this: for both m1 and m2, the second Before fires first, then the first Before, then the method body itself; m1 goes on to produce the Success and After messages, whereas m2 skips Success but still produces the After message before the exception propagates.

So we’ve seen that we can do fluent exception handling, caching and aspects without aspects using the same basic idea: we take something of type Func<TR> and produce something else of the same type. Of course, this means that we’re free to mix and match all of these things if we wanted to, and compose them all using Linq’s Aggregate method! For once, though, I think I’ll leave that as an exercise for the reader.

And of course, we can transform other things besides closures as well – we can use the same approach to transform any instance of type T to some other T instance. In fact, let’s declare a delegate to capture such a generalized concept:


delegate T Decorate<T>(T t);


Why Decorate? Well, nothing is ever new on this blog. I’m just rediscovering old ideas and reinventing flat tires, as Alan Kay put it. In this case, it turns out that all we’ve been doing is looking at the good old Decorator pattern from the GoF book in a new or unfamiliar guise.
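
As a sketch of how Decorate composes (the decorators here are invented for illustration, and Aggregate needs a using System.Linq directive), we can fold a sequence of decorators over a closure:

Decorate<Func<string>>[] decorators = {
    f => () => f().ToUpper(),  // uppercase the result
    f => () => f() + "!"       // append an exclamation mark
};
Func<string> greet = () => "hello";
// Fold the decorators over the original closure; later decorators end up outermost.
var decorated = decorators.Aggregate(greet, (acc, d) => d(acc));
Console.WriteLine(decorated()); // HELLO!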


Linq to Exceptions

Lately I’ve been thinking about the mundane task of exception handling. Remarkably unsexy, and yet completely mandatory. There’s no escape from handling the exceptions that may (and therefore will) occur in your application, unless you want it to crash or misbehave regularly.

Proper exception handling can be a chore anywhere in your application, but in particular at integration points. Whenever your application needs to communicate with some external service, any number of things can go wrong, at various levels in the stack – from application level errors to network problems. We need to be able to cope with all these different kinds of errors, and hence we surround each service call with a veritable flood of exception handlers. That sort of works, but there are problems – first, the actual service is drowning amidst all the exception handling, and second, we get a lot of code duplication, since we tend to handle many kinds of exceptions in the same way for all service calls.

I’ve been discussing how to best solve this problem in a completely general and flexible way in several impromptu popsicle- and coffee-driven sessions with some of my compatriots at work (in particular Johan and Jonas – thanks guys!). Some of the code examples in this blog post, especially those that exhibit some trace of intelligence, can legitimately be considered rip-offs, evolutions, misunderstandings or various other degenerations of similar code snippets they have come up with.

But I’m getting ahead of myself. Let’s return to the root of the problem: how exception handling code has a tendency to overshadow the so-called business logic in our applications and to cause severe cases of duplication.

The problem is minor if there is a single exception we’re catching:


try {
    // Interesting code.
}
catch (SomeException ex) {
    // Dull exception handling code.
}

However, things regress quickly with additional types of exceptions:


try {
    // Interesting code.
}
catch (SomeException ex) {
    // Dull code.
}
catch (SomeOtherException ex) {
    // Dull code.
}
catch (YetAnotherException ex) {
    // Dull code.
}
catch (Exception ex) {
    // Dull code.
}

This is a bit messy, and the signal-to-noise ratio is low. It gets much worse when you have a second piece of interesting code that you need to guard with the same exception handling. Suddenly you have rampant code repetition on your hands.

A solution would be to use a closure to inject the interesting code into a generic exception handling method, like this:


TR Call<TR>(Func<TR> f) {
    try {
        return f();
    }
    catch (SomeException ex) {
        // Dull code.
        return default(TR);
    }
    catch (SomeOtherException ex) {
        // Dull code.
        return default(TR);
    }
    catch (YetAnotherException ex) {
        // Dull code.
        return default(TR);
    }
    catch (Exception ex) {
        // Dull code.
        return default(TR);
    }
}


And then you can use this in your methods:


Foo SafeMethod1(int x, string s) {
    return Call(() => Method1(x, s));
}

Bar SafeMethod2(double d) {
    return Call(() => Method2(d));
}


Adding another method is trivial:


Quux SafeMethod3() {
    return Call(() => Method3());
}

This works pretty nicely and solves our immediate issue. But there are a couple of limitations. In particular, three problems spring to mind:

  1. What if you want to return some legal, non-default value of type TR from one or more of the catch-blocks? As it stands now, each catch-block must either rethrow or return the default value of TR.
  2. What if there are variations in how you want to handle some of the exceptions? For instance, it may be that YetAnotherException should be handled differently for each method.
  3. What if there are slight variations between the “catching” needs for the various methods? What if you decided that SafeMethod2 doesn’t need to handle SomeOtherException, whereas SafeMethod3 should handle IdiosyncraticException in addition to the “standard” ones?

As an answer to the first two problems, you could pass in each exception handler to the Call method! Then you would have a method like this:


TR Call<TR>(
    Func<TR> f,
    Func<SomeException, TR> h1,
    Func<SomeOtherException, TR> h2,
    Func<YetAnotherException, TR> h3,
    Func<Exception, TR> h4)
{
    try {
        return f();
    }
    catch (SomeException ex) {
        return h1(ex);
    }
    catch (SomeOtherException ex) {
        return h2(ex);
    }
    catch (YetAnotherException ex) {
        return h3(ex);
    }
    catch (Exception ex) {
        return h4(ex);
    }
}


And at this point you’re about to stop reading this blog post, because WTF. Now your methods look like this:

Foo SafeMethod1(int x, string s) {
    return Call(() => Method1(x, s),
        ex => ..., // Handle SomeException
        ex => ..., // Handle SomeOtherException
        ex => ..., // Handle YetAnotherException
        ex => ...  // Handle Exception
    );
}

So we’re pretty much back at square one, except it’s a bit more convoluted and confusing for the casual reader. And we still don’t have a good solution for the third problem. We could of course fake “non-handling” of SomeOtherException for SafeMethod2 by simply handing in a non-handling handler, that is, one that simply rethrows directly: ex => { throw ex; }. But that’s ugly, and what about the IdiosyncraticException? Well, it’s not going to be pretty:

Quux SafeMethod3() {
    try {
        return Call(() => Method3(),
            ex => ..., // Handle SomeException
            ex => ..., // Handle SomeOtherException
            ex => ..., // Handle YetAnotherException
            ex => ...  // Handle Exception
        );
    }
    catch (IdiosyncraticException ex) {
        // Do something.
    }
}

Which just might be the worst code ever, and it also has the idiosyncrasy that the additional catch-handler will only be reached if the Exception handler rethrows. Horrible. Better, perhaps, to put it inside?

Quux SafeMethod3() {
    return Call(() =>
        {
            try { return Method3(); } catch (IdiosyncraticException ex) { ... handler code ... }
        },
        ex => ..., // Handle SomeException
        ex => ..., // Handle SomeOtherException
        ex => ..., // Handle YetAnotherException
        ex => ...  // Handle Exception
    );
}

Well yes, slightly better, but still pretty horrible, and much worse than just suffering the duplication in the first place. But maybe we’ve learned something. What we need is composition – our current solution doesn’t compose at all. We need to be able to put together exactly the exception handlers we need for each method, while at the same time avoiding repetition. The problem is in some sense the coupling between the exception handlers. What if we tried a different approach, handling a single exception at a time?

We could have a less ambitious Call-method that would handle just a single type of exception for a method. Like this:


TR Call<TR, TE>(
    Func<TR> f,
    Func<TE, TR> h)
    where TE : Exception
{
    try {
        return f();
    }
    catch (TE ex) {
        return h(ex);
    }
}


Now we have a single generic exception handler h. Note that when we constrain the type variable TE to be a subclass of Exception, we can use TE in the catch clause, to select precisely the exceptions we would like to catch. Then we could write a method like this:


Frob SafeMethod4a() {
    return Call(
        () => new Frob(),
        (NullReferenceException ex) => ... );
}


What if we wanted to catch another exception as well? The solution is obvious:


Frob SafeMethod4b() {
    return Call(
        () => SafeMethod4a(),
        (InvalidOperationException ex) => ... );
}


And yet another exception?


Frob SafeMethod4c() {
    return Call(
        () => SafeMethod4b(),
        (FormatException ex) => ... );
}


You get the picture. Of course, we can collapse all three to a single method if we want to:


Frob SafeMethod4() {
    return
        Call(() =>
            Call(() =>
                Call(() =>
                    FrobFactory.Create(),
                    (NullReferenceException ex) => ... ),
                (InvalidOperationException ex) => ... ),
            (FormatException ex) => ... );
}


What have we gained? Well, not readability, I’ll admit that. But we’ve gained flexibility! Flexibility goes a long way! And we’ll work on the readability shortly. First, though: just in case it’s not clear, what we’ve done is that we’ve created an exception handling scenario that is similar to this:


Frob TraditionalSafeMethod4() {
    try {
        try {
            try {
                return FrobFactory.Create();
            }
            catch (NullReferenceException ex) { ... handler code ... }
        }
        catch (InvalidOperationException ex) { ... handler code ... }
    }
    catch (FormatException ex) { ... handler code ... }
}

So there’s nothing very complicated going on here. In fact, I bet you can see how similar the two methods really are – the structure is identical! All we’ve done is replace the familiar try-catch construct with our own Call-construct.

As an aside, we should note that the composed try-catch approach has slightly different semantics than the sequential, coupled try-catch approach. The difference in semantics is due to the decoupling provided by the composed try-catch approach – each catch-block is completely independent. Therefore, there is nothing stopping us from having multiple catch-handlers for the same type of exception, should we so desire.

Now, to work on the readability a bit. What we really would like is some way to attach catch-handlers for various exception types to our function call. So assuming that we wrap up our original function call in a closure using a delegate of type Func<TR>, we would like to be able to attach a catch-handler for some exception type TE, and end up with a new closure that still has the type Func<TR>. Then we would have encapsulated the exception handling completely. Our unambitious Call-method from above is almost what we need, but not quite. Instead, let’s define an extension method on the type that we would like to extend – Func<TR>, that is:


static class CatchExtensions
{
    public static Func<TR> Catch<TR, TE>(
        this Func<TR> f,
        Func<TE, TR> h)
        where TE : Exception
    {
        return () => {
            try {
                return f();
            } catch (TE ex) {
                return h(ex);
            }
        };
    }
}


So the trick is to return a new closure that encapsulates calling the original closure and the exception handling. Then we can write code like this:


Frob ExtSafeMethod4() {
    Func<Frob> it = () => FrobFactory.Create();
    var safe =
        it.Catch((NullReferenceException ex) => ... )
          .Catch((InvalidOperationException ex) => ... )
          .Catch((FormatException ex) => ... );
    return safe();
}

Now the neat thing is that you can very easily separate out the catch-handler-attachment from the rest of the code:


Frob ExtSafeMethod4b() {
    var safe = Protect(() => FrobFactory.Create());
    return safe();
}

Func<TR> Protect<TR>(Func<TR> it) {
    return
        it.Catch((NullReferenceException ex) => ... )
          .Catch((InvalidOperationException ex) => ... )
          .Catch((FormatException ex) => ... );
}

So we have essentially created a fluent interface for attaching catch-handlers to a method call. The cool thing is that it is trivial to attach additional exception handlers as needed – and since we do so programmatically, we can even have logic to control the attachment of handlers. Say we discovered that we needed to catch WerewolfExceptions when the moon is full? No problem:


Func<Frob> WolfProof() {
    var f = Protect(() => FrobFactory.Create());
    if (IsFullMoon()) {
        f = f.Catch((WerewolfException ex) => ... ); // silver bullet?
    }
    return f;
}


In my eyes, this is pretty cool. You might be running away screaming, thinking I’m crazy and that with this approach, you’ll never know which exceptions you’re actually catching anymore. You could be right. Opinions differ.

But that’s OK. All I’m doing is providing an alternative approach to the handling of multiple exceptions – one that I think offers increased power and flexibility. I’m not saying you should take advantage of it. With greater power comes greater responsibility and all that.

And besides, we still haven’t talked about Linq. An alternative (and attractive) solution to our current fluent interface is to attach a sequence of catch-handlers at once. Something like this:


// Unfortunately, this won't compile.
Func<TR> Protect<TR>(Func<TR> it) {
    return
        it.CatchAll(
            (NullReferenceException ex) => ... ,
            (InvalidOperationException ex) => ... ,
            (FormatException ex) => ... );
}

However, it’s surprisingly difficult to provide a suitable type for that sequence of catch-handlers – in fact, the C# compiler fails to do so! The problem is that delegates are contravariant in their parameters, which means that a delegate D1 is considered a subtype of delegate D2 if the parameters of D1 are supertypes of the parameters of D2. That’s all a bit abstract, so perhaps an example will help:


Action<object> d1 = (object o) => {};
Action<string> d2 = (string s) => {};
d1 = d2; // This won't compile.
d2 = d1; // This is OK.


To make sense of the abstract description above, assume that D1 is Action<object> and D2 is Action<string>. Since the D1 parameter (object) is a supertype of the D2 parameter (string), it follows that D1 is a subtype of D2 – and not the other way around, as we might have guessed. This is why the C# compiler won’t let us assign a D2 instance to a D1 reference.

The implication is that the C# compiler will fail to find a type that will reconcile the catch handlers above. In particular, due to the contravariance of delegate parameters, we cannot type the sequence as Func<Exception, TR>, since neither Func<NullReferenceException, TR>, nor Func<InvalidOperationException, TR>, nor Func<FormatException, TR> are assignable to Func<Exception, TR>. It would go the other way around: we could assign a Func<Exception, TR> to all three of the other types, but which one should the compiler pick? If it (arbitrarily) picked Func<NullReferenceException, TR>, clearly it wouldn’t work for the two other delegates – and all other choices have the same problem.
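
The same point can be made directly with Func (again, just an illustration):

Func<Exception, string> general = ex => ex.Message;
Func<NullReferenceException, string> specific = general; // OK: parameter is contravariant
// Func<Exception, string> g = specific;                 // won't compile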

So we’re stuck. Sort of. The only solution we have is to hide the exception type somehow, so that we don’t have to include the exception type in the type of the sequence. Now how do we do that? Well, in some sense, we’ve already seen an example of how to do that: we hide the exception handling (and the type) inside a closure. So all we need is some way to convert an exception handler to a simple transformation function that doesn’t care about the type of the exception itself. Like this:


Func<Func<TR>, Func<TR>> Encapsulate<TR, TE>(Func<TE, TR> h)
    where TE : Exception
{
    return f => () =>
    {
        try {
            return f();
        }
        catch (TE ex) {
            return h(ex);
        }
    };
}


So what is this thing? It’s a method that encapsulates the catch-handler inside a closure. This closure will take as input a closure of type Func<TR> and produce as output another closure of type Func<TR>. In the process, we have hidden the type TE, so that the C# compiler doesn’t have to worry about it anymore: all we have is a thing that will transform a Func<TR> to another Func<TR>.

So now we can sort of accomplish what we wanted, even though it’s less than perfect.


Func<TR> Protect<TR>(Func<TR> it) {
    return
        it.CatchAll(
            Encapsulate((NullReferenceException ex) => ...),
            Encapsulate((InvalidOperationException ex) => ...),
            Encapsulate((FormatException ex) => ... ));
}

But now we can have some fun using Linq’s Aggregate method to compose our exception handlers. So we might write code like this:


var catchers = new [] {
    Encapsulate((ArgumentException x) => x.Message),
    Encapsulate((InvalidOperationException x) => { Log(x.Message); throw x; }),
    Encapsulate((NullReferenceException x) => "Uh oh")
};
var protect = catchers.Aggregate((acc, nxt) => thing => nxt(acc(thing)));
var f = protect(() => FetchStringSomewhere());
var s = f();

The cool part is obviously the Aggregate call, where acc is the “accumulated” composed closure, nxt is the next encapsulated exception handler and thing is the thing we’re trying to protect with our exception handlers – so in other words, the closure that contains the call to FetchStringSomewhere.

And of course we can now implement CatchAll if we want to:


static class CatchExtensions
{
    public static Func<TR> CatchAll<TR>(
        this Func<TR> f,
        params Func<Func<TR>, Func<TR>>[] catchers)
    {
        // Requires using System.Linq for Aggregate.
        var protect = catchers.Aggregate((acc, nxt) => thing => nxt(acc(thing)));
        return protect(f);
    }
}

Now please, if you are Eric Lippert and can come up with code that proves that I’m wrong with respect to the typing of sequences of exception handler delegates – please let me know! I would very much like to be corrected if that is the case.


The hawk and the tower


Behold the tower of good intentions!

The stack of books shown in the picture is my accumulated backlog of unread books. (Of course, in terms of data structures, it’s not really a stack, it’s more like a priority queue. Although the priority tends to be skewed somewhat towards the most recently purchased books.)

As you can see, the tower is made up of completely awesome, jaw-drop-inducing books. (You can browse them here if you’re interested.) I’m quite convinced that there are no better books out there — except perhaps Knuth, but we already discussed that. (Also, I wasn’t able to locate my copy of Jon Bentley’s Programming Pearls for the picture, otherwise it would be in there somewhere.) And yet I haven’t read any of them! That is, I may have read the introduction, maybe the first chapter, in some cases a bit more. But I don’t think that I’ve read more than 10% of any one of them.

Now the weird thing is that, somewhat pathetically, I’ve always associated a bit of pride with my tower of books – as if merely owning these great books would somehow cause some of their greatness to rub off on me. The notion is clearly absurd – it makes no sense – but that’s the way it has been. Lately though, I can’t really look at the tower without thinking of my favorite Whitman quote, from Song of Myself:

The spotted hawk swoops down and accuses me, he complains of my gab and my loitering.

Gab and loitering indeed! I read about books, I purchase books, I write blog posts about books – what about actually reading them?

Clearly the feeling of pride is inappropriate and unfounded. You take pride in what you’ve done, not what you may or may not do in the future. What does the tower represent? The opposite of action! The absence of accomplishment! It’s a monument to things I haven’t done!

The best thing you can say about the tower is that it shows some ambition and awareness – at least I am knowledgeable enough to recognize great books when I see them! I guess that’s something. A morsel. And of course the tower represents potential energy in the sense of potential learning. I have a well of knowledge available, right in front of me, that I can tap into any time I want to. Finally, the tower arguably says something about who I would like to be, what I would like to know. For instance, it is easy to infer by glancing at the tower that I have an interest in functional programming in general and Lisp(s) in particular. That’s good, I suppose – I feel OK about that.

What appears to be lacking, though, is commitment, focus, and getting things done – in particular getting books read! This has deeper repercussions as well, because it casts a shadow of doubt over the proclaimed ambition. If I really wanted to learn all these things, shouldn’t I be reading my books?

Let’s not jump to any conclusions just yet, though. First, let’s hear from the defense. What is keeping me from reading?

Well, there are two primary impediments that come to mind: time and discomfort.

Time is pretty simple. I have a family, hence I don’t have time. Or rather, time is always a scarce resource. After the kids have gone to bed, I have 2-3 hours available to do “things”. For the week in total, I’d say the number of free hours oscillates between 10 and 20. Reading books now has to compete with any number of other activities – mandatory (doing laundry), optional (watching a movie) and in-between (spending time with my wife) – for a slice of that time. Hence there are limits to the amount of time I both can and am willing to put into reading. The builder of the tower, on the other hand, isn’t aware of time – he just tends to purchase all the books he wants to read. So there’s a gap there. It’s not at all obvious that the time I have available will be nearly sufficient to consume books as quickly as I buy them. In fact, let’s investigate it a little bit by doing a bit of trivial math. Methinks the math will speak volumes about the realism of working my way through the tower.

For instance, say I want to learn Python in greater depth. I decide to work through Zed Shaw’s Learn Python The Hard Way (which is not in my book tower, but it is on my wish list! Oh dear.) It seems like a reasonable way to go. Now, LPTHW consists of 52 chapters. That means that if I work through one chapter per week, that’s an entire year to work through that one book. Obviously I could cut that down to half a year by doing two chapters per week instead, but I would have to double my reading budget. I could cut it down to three months, but then I’d have to quadruple it. Am I willing to do that? I’m not sure, because truth be told, I don’t actually have a reading budget. So I can’t really answer those questions meaningfully. (I guess that’s an improvement point right there. I should totally make a reading budget. (And a laundry budget. And a movie budget. And a wife budget? Don’t think I’ll get away with that.)) Still, it’s fairly obvious that I have to prioritize quite heavily which books I really want to read – and that working my way through even parts of the tower is going to take years. Might as well come to terms with that.

And that’s pretty much it for time. Make a budget, prioritize accordingly. The budget cannot be made arbitrarily small and still be meaningful, though. When I read a book, each sentence leaves a rather soft and volatile imprint in my memory. It will get wiped out relatively quickly if I don’t keep at it. There’s a critical mass of sustained reading necessary in order to keep my mind in the book, so to speak. It’s like riding a wave. If I don’t keep with the flow, I will fall off with a splash. Then I will have to backtrack and re-read and try to ride the next wave. If the pattern repeats too many times, I’ll have to start over at page one. Also, the critical reading mass depends on the subject matter – the more complex it is, the more information I need to keep in my mind at the same time, and hence the more intense and sustained reading required to stay abreast.

And that is it for time. Time is pretty simple. Just make sure that the reading budget is sufficient for riding the wave. The second impediment, discomfort, is much more – uh – discomforting.

You see, a common trait among the books in the tower is that they entail learning. The challenge is that any significant act of learning involves some amount of discomfort. When learning something non-trivial, something actually worth learning, there will be resistance. There will be things I don’t understand, at least not immediately – perhaps I may need to read, re-read and re-read again in order to come anywhere near grasping it. That can be frustrating and painful.

The feeling of discomfort is amplified by the fact that my brain is getting older and a bit rusty. The neurons are behaving increasingly anti-socially, they’re grumpy and less willing to make new associations than they used to be. Perhaps they’ve been hanging out with me for too long, I may have had a bad influence. Anyways, a less flexible brain means even more discomfort and harder work in order to learn something new.

This brings me to the scary part, which I call topic skipping. The problem is that the discomfort of reading a book that requires learning kicks in exactly when the introductions are over with, and the material starts offering genuine resistance. (You’ll recall that I’ve read up to 10% of all the books in the tower.) At that point, it’s all too easy to jump ship, abandon the book, and start over with something new. That’s a pretty pathological pattern. In a way, it resembles what is known as disk thrashing: a term used to describe a hard drive that is overworked by excessive moving of information between system memory and virtual memory.

Now according to Wikipedia, that trustworthy workhorse of information that may or may not be trustworthy, a user can do the following to alleviate disk thrashing:

  • Increase the amount of RAM in the computer.
  • Adjust the size of the swap file.
  • Decrease the number of programs being run on the computer.

In terms of reading, this translates roughly to increasing the size of your brain (a good idea, but requires significant advances in medicine), increasing the amount of time available for reading (I’d like to, but cannot conjure up any more hours in the day) and decreasing the number of books you’re trying to read at once (excellent advice!).

The main problem with both disk thrashing and topic skipping is waste. You appear to be working, but you’re really just spending your time changing topics. Given that time is a scarce resource, it makes no sense to waste it like that. It would be much better if I would just harden up, endure the discomfort of feeling stupid, and resist the temptation of starting over on some new and shiny topic.

So there you have it. Time and discomfort. That’s why I’m not reading books fast enough, that’s why reading efforts often get abandoned after the first chapter, and that’s why my stack of unread books is growing into a veritable tower of Babel. Some defense! Turns out I’m a busy wimp! I’m afraid the spotted hawk won’t be too impressed!

I still have a teeny tiny trick up my sleeve, though. I believe commitment is the antidote to both gab and loitering, and it turns out that involving other people does wonders for creating commitment. So I’m teaming up with some compatriots from work to form a book circle. First book up is SICP which, coincidentally, you’ll find residing at the top of the tower. So there is hope! Which is good, because I totally need to make room for this awesome-looking book which discusses multi-paradigm programming using an obscure language called Oz. How’s that for shiny!


To Knuth or not to Knuth

I received an email with an interesting question a while back. The question was this:

Dear doctor, should I read Knuth?

As you can see, the email was deviously crafted to appeal to my vanity – which of course was a clever thing to do, since it spurred one of those lengthy, slightly self-absorbed responses I’m wont to give. However, it also triggered an unexpected chain of events which is now resulting in this blog post. You see, the flattery inflated my ego to the degree that I figured I should follow Scott Hanselman’s advice, and maximize the effectiveness and reach of my keystrokes. I am therefore publishing my response here to the vast readership of my blog. That means you, dear reader! Nothing like a bit of old-fashioned hubris!

So, should you read Knuth? I dunno.

It’s an interesting question, though, with some pretty deep repercussions. In fact, it is interesting enough that Peter Seibel included it as one of the questions he asked each of the prominent programmers he interviewed for the book “Coders at Work”. You should totally read that book, by the way.

Anyways. In case you’re unfamiliar with Knuth, we’re talking about this guy:

Knuth

Professor Donald Knuth

Knuth received the ACM Turing award in 1974 (a year before I was born) for his work on the analysis of algorithms and data structures. That work is captured in the first three volumes of a series of books known as The Art of Computer Programming or TAOCP for short. “Reading Knuth” refers to reading those books.

Of course, that’s a pretty significant argument for reading Knuth right there. The content of the books is worth a Turing award! I bet you don’t have many books in your bookshelf for which that is true. I don’t, and I have awesome books! TAOCP is the quintessential, be-all end-all resource for the study of algorithms. If you are familiar with the so-called big-oh notation for estimating run-time costs of algorithms, its widespread use in computer science is largely due to Knuth’s influence.

One of the quirks of Knuth is that apparently all the examples in TAOCP use a language called “MIX assembly language”. It runs on a hypothetical computer called “MIX”, dreamed up by Knuth. Clearly, then, the examples are not directly transferable to your day-to-day work as a programmer. And undoubtedly, it can be somewhat off-putting for modern readers to have to learn everything by means of this ancient assembly language for an obsolete machine that never existed. To be fair, Knuth has since come up with MMIX, a “64-bit RISC architecture for the third millennium” (no less), intended to replace the original MIX in TAOCP. I’m not quite sure how far the work of upgrading MIX to MMIX has progressed, though. Also I’m not quite sure that it really rectifies the situation. But YMMV.

A further obstacle is that TAOCP requires significant mathematical sophistication on the part of its readers. It speaks volumes to me when Guy L. Steele Jr. claims he lacks the mathematical background to read parts of it. What about us mere mortals, then?

These challenges conspire to raise some serious doubt: is reading Knuth worth the effort? Clearly it requires a lot of intellectual struggle and perseverance. Isn’t it simply too much trouble to go through? At the same time, the challenges contribute to a further selling point for Knuth: TAOCP as the ultimate rite-of-passage for programmers. Arguably you’ve beaten the game of programming and computers in general when you’ve read Knuth.

So then: do I think you should read Knuth? I dunno.

I feel like it’s an important question to try to answer, though. It’s kind of amusing, really – it reminds me of the opening of “The Myth of Sisyphus” by Albert Camus, 20th century French writer and philosopher. It was one of my favorite books way back when I was young and oh-so much older than I am now. I would be sitting in a cafe, reading French literature and aiming for an air of profoundness and mystery by occasionally raising my head and gazing at the world through faraway eyes. It attracted fewer girls than you’d think.

Anyways, the opening goes like this:

There is but one truly serious philosophical problem, and that is suicide. Judging whether life is or is not worth living amounts to answering the fundamental question of philosophy. All the rest – whether or not the world has three dimensions, whether the mind has nine or twelve categories – comes afterwards. These are games; one must first answer. And if it is true, as Nietzsche claims, that a philosopher, to deserve our respect, must preach by example, you can appreciate the importance of that reply, for it will precede the definitive act. These are facts the heart can feel; yet they call for careful study before they become clear to the intellect.

Isn’t it just grand? Who cares about girls when you can read stuff like that.

Translated to the Knuth dilemma: if I reach the conclusion that you should read Knuth, I immediately have a problem: it follows suit that I should proceed to read Knuth as well! (Indeed, how can I claim to give a meaningful and convincing recommendation for or against it without actually having read the books first?)

So then: should I read Knuth? I dunno.

So far in my career, I’ve sort of dodged the issue of Knuth and TAOCP. I’ve avoided ordering the books despite their obvious merits because I’ve been quite convinced that they would just be sitting on a bookshelf collecting dust and radiating failure. As long as I don’t actually have the books in my possession, I don’t feel that bad about not having read them. But if I were to own the books, and I still didn’t read them, I would actively have not-read Knuth. Which is clearly a lot worse. Hence I’ve backed out of the challenge thus far by cunningly avoiding purchase. But of course that has more to do with my knowing a thing or two about my innate laziness and resistance to discomfort than anything else. It’s not a good reason to avoid Knuth, but it is at least a truthful one.

But that’s me, not necessarily you. What about you? Should you read Knuth? I dunno.

In my mind, reading Knuth is similar to choosing to walk three years by foot to some remote location in the mountains, in order to study with some famous zen monk (a thought that is somehow made more amusing by the fact that Knuth bears a certain resemblance to Yoda). Once you’ve made up your mind to study with this monk, you don’t really question why he does things this way or that way – you just do as you’re told. Wax on, wax off, if you like.

Paro-taktsang-tigers-nest-monastery-bhutan-500x375

Taktshang – the Tiger’s Nest monastery

Of course, there is an underlying assumption that you will, at some point, become enlightened. The invisible veil that hindered your sight before (although you were not aware of it) will fall from your eyes, and everything will become clear. But you don’t ask yourself when or how this will happen. I believe this is the attitude with which you need to approach Knuth in general, and the MIX architecture in particular.

However. While I am quite convinced that you will be enlightened should you seek Master Knuth in his monastery, I am equally convinced that there are other paths that lead to enlightenment as well.

In the end, I suppose it comes down to which monk you want to study with. What are you looking for? What is the nature of the enlightenment you seek? If the answer is that you want the most profound understanding possible of how algorithms and data structures correlate with the detailed workings of the physical machine, you should read Knuth. If the answer is something else, well, perhaps you should read something else.

For my part, I’m not sure I won’t at some point in the future, in fact, walk the long, winding, barefoot road to Knuth. First, though, I want to be a wizard’s apprentice in the school of Abelson and Sussman. There I can study the problems of calculation and abstraction free from the tethering of the underlying hardware. It seems like so much more fun, to be honest. That may be immature grounds for making such an important decision, but there you go. I’m so much younger now that I’m older, I can afford those kinds of transgressions.