Self-referential validation in Mkay

…so I implemented eval for Mkay. That sentence doesn’t have a first half, because I couldn’t think of any good reasons for doing so. I happen to think that’s a perfectly valid reason in and of itself, but I fear that’s a minority stance. But it doesn’t really matter. The second half of the sentence is true in any case. I implemented eval for Mkay.

It might be unclear to you exactly what I mean by that, though. What I mean is that Mkay now has a function (called eval) that you can call inside an Mkay expression. That function will take another Mkay expression as a string parameter and produce a boolean result when called. That result will then be used within the original Mkay expression. Still opaque? A concrete example should make it entirely transparent.

public class Guardian
{
    public string Rule { get; set; }

    [Mkay("eval Rule")]
    public string Value { get; set; }
}

So here we have a model that uses eval inside an Mkay expression. How does it work in practice? Have a look:

So what happens in the video is that the rule “(eval Rule)” that annotates the Value property says that you should take the content of the Rule property and interpret it as the rule that the Value property must adhere to. It’s sort of like SQL injection, only for Mkay. Isn’t that nice?

The string passed to eval could of course be gleaned from several sources, not just a single property. For instance, the rule “(eval (+ "and " A " " B))” creates and evaluates a validation rule by combining the string "and " with the value of property A, a space, and the value of property B.

public class Composed
{
    public string A { get; set; }

    public string B { get; set; }

    [Mkay("eval (+ \"and \" A \" \" B)")]
    public string Value { get; set; }
}

It’s even more amusing if you go all self-referential and Douglas Hofstadter-like, and have the value and the rule be one and the same thing. To accomplish that, all you have to do is annotate your property with “(eval .)”.

public class Self
{
    [Mkay("eval .")]
    public string Me { get; set; }

    [Mkay("eval .")]
    public string Too { get; set; }
}

And then we can do stuff like this:

Can you do anything useful with this? Probably not. But you’ve got to admit it’s pretty cute.


Mkay: One validation attribute to rule them all

If you’ve ever created an ASP.NET MVC application, chances are you’ve used data annotation validators to validate user input to your models. They’re nice, aren’t they? They’re so easy to use, they’re almost like magic. You simply annotate your properties with some declarative constraint, and the good djinns of the framework provide both client-side validation and server-side validation for you. Out of thin air! The client-side validation is implemented in JavaScript and gives rapid feedback to the user during data entry, whereas the server-side validation is implemented in .NET and ensures that the data is valid even if the user should circumvent the client-side validation somehow (an obvious approach would be to disable JavaScript in the browser). Magical.

The only problem with data annotation validators is that they are pretty limited in their semantics. The built-in validation attributes only cover a few of the most common, basic use cases. For instance, you can use the [Required] attribute to ensure that a property is given a value, the [Range] attribute to specify that the value of a property must be between two constant values, or the [RegularExpression] attribute to specify a regular expression that a string property must match. That’s all well and good, but not really suited for sophisticated validation. In case you have stronger constraints or invariants for your data model, you must reach for one of two solutions. You can use the [Remote] attribute, which allows you to do arbitrary validation at the server. In that case, however, you’re doing faux-client-side validation. What really happens behind the scenes is that you fire off an AJAX call to the server. The alternative is to implement your own custom validation attribute, and write validation logic in both .NET and JavaScript. However, that quickly becomes tiresome. While your custom attribute certainly can do arbitrary model validation, you’ve ended up doing the work that the djinns should be doing for you. There is no magic any more, there is just grunt work. The spell is broken.
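For reference, here’s roughly what the built-in attributes look like in use. This is a minimal sketch; the model and its constraints are hypothetical, but the three attributes are the standard ones from System.ComponentModel.DataAnnotations:

using System.ComponentModel.DataAnnotations;

// A hypothetical model using only the built-in validation attributes.
public class Booking
{
    // The customer name must be present.
    [Required]
    public string CustomerName { get; set; }

    // The number of rooms must be between two constant values.
    [Range(1, 10)]
    public int Rooms { get; set; }

    // The PIN code must be exactly four digits.
    [RegularExpression(@"^\d{4}$")]
    public string PinCode { get; set; }
}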

Wouldn’t it be terribly nifty if there were some way to just express your sophisticated rules and constraints declaratively as intended, and have someone else do all the hard work? That’s what I thought, too. At this point, you shouldn’t be terribly surprised to learn that such a validation attribute does, in fact, exist. I’ve made it myself. The attribute is called Mkay, and supports a simple rule DSL for expressing pretty much arbitrary constraints on property values using a LISP-like syntax. Why LISP, you ask? For three obvious reasons:

  1. LISP syntax is super-easy to parse.
  2. Any excuse to shoe-horn LISP-like expressions into a string is a good one.
  3. LISP syntax is super-easy to parse.

So that’s the syntax, but exactly what kinds of rules can you express using the Mkay rule DSL? Well, that’s pretty much up to you – that’s the whole point, after all. In an Mkay expression, you can have constant values, property access (to any property on the model), logical operators (and and or), comparison operators (equality, greater-than, less-than and so forth), arithmetic, and a handful of simple functions (such as len for string length, max for selecting the maximum value, now for the current time, etc.). It’s not too hard to extend it to support additional functions, but obviously they must then be implemented and wired up in both JavaScript and .NET code. A contrived example should make it clearer:


public class Person
{
    [Mkay("(< (len .) 5)", ErrorMessage = "That's too long, my friend.")]
    public string Name { get; set; }

    [Mkay("(>= . \"31.01.1900\")")]
    public DateTime BirthDate { get; set; }

    [Mkay("(<= . (now))")]
    public DateTime DeathDate { get; set; }

    [Mkay("(and (>= . BirthDate) (<= . DeathDate))")]
    public DateTime LastSeen { get; set; }
}


In case it’s not terribly obvious, the rules indicate that the length of the Name property must be less than 5, the BirthDate must be no earlier than 31.01.1900, the DeathDate must be no later than the current date, and LastSeen must understandably be some time between birth and death. Makes sense?

If you’ve never seen a LISP expression before, note that LISP operators are put in front of, rather than in between, the values they operate on. This is known as prefix notation, as opposed to infix notation (or postfix notation, where the operator comes at the end). An expression like “(<= . (now))” should be interpreted as “the value of the DeathDate property should be less than or equal to the value of (now)”. From this, you might deduce (correctly) that “.” is used as shorthand for the property being validated. This is the first of three spoonfuls of syntactic sugar employed by Mkay to simplify the syntax for validation rules. The second spoonful is an implicit “.” for comparisons, which means that you can actually write “(<= (now))” instead of “(<= . (now))”. And finally, the third spoonful lets you drop the outermost parentheses, so you end up with “<= (now)”. Using the simplified syntax, the example looks like this:


public class Person
{
    [Mkay("< (len .) 5", ErrorMessage = "That's too long, my friend.")]
    public string Name { get; set; }

    [Mkay(">= \"31.01.1900\"")]
    public DateTime BirthDate { get; set; }

    [Mkay("<= (now)")]
    public DateTime DeathDate { get; set; }

    [Mkay("and (>= BirthDate) (<= DeathDate)")]
    public DateTime LastSeen { get; set; }
}

So you can see that the sugar simplifies things quite a bit. Hopefully you’ll also agree that 1) the rules expressed in Mkay are more sophisticated than what the built-in validation attributes support, and 2) you’re pretty much free to write your own arbitrary rules using Mkay, without ever having to write a custom validation attribute again. The magic is back!

However, since this is a technical blog, let’s take a look under the covers to see how things work.

The crux of doing your own custom validation is to create a validation attribute that inherits from ValidationAttribute and implements IClientValidatable. Inheriting from ValidationAttribute is what lets us hook into the server-side validation process, whereas implementing IClientValidatable gives us a chance to send the necessary instructions to the browser to enable client-side validation. We’ll look at both of those things in turn. For now, though, let’s just concentrate on creating an instance of the validation attribute itself. In Mkay, the name of the validation attribute is MkayAttribute. No surprises there.


[AttributeUsage(AttributeTargets.Property, AllowMultiple = false, Inherited = true)]
public class MkayAttribute : ValidationAttribute, IClientValidatable
{
    private readonly string _ruleSource;
    private readonly string _defaultErrorMessage;
    private readonly Lazy<ConsCell> _cell;

    public MkayAttribute(string ruleSource)
    {
        _ruleSource = ruleSource;
        _defaultErrorMessage = "Respect '{0}', mkay?".With(ruleSource);
        _cell = new Lazy<ConsCell>(() => new ExpParser(ruleSource).Parse());
    }

    protected ConsCell Tree
    {
        get { return _cell.Value; }
    }

    public IEnumerable<ModelClientValidationRule> GetClientValidationRules(
        ModelMetadata metadata, ControllerContext context) …

    protected override ValidationResult IsValid(
        object value, ValidationContext validationContext) …
}

The MkayAttribute constructor takes a single string parameter, which is supposed to contain a valid Mkay expression. Things will blow up at runtime if it doesn’t. The ExpParser class is responsible for parsing the Mkay expression into a suitable data structure known as an abstract syntax tree; AST for short. This happens lazily whenever someone tries to access the AST, which in practice means in the GetClientValidationRules and IsValid methods.

Due to LISP envy, ExpParser (simple as it is) uses lists and atoms as building blocks for the AST. Atoms represent simple things: a constant value (such as 10), the name of a property (such as BirthDate), or a symbol representing some operation (such as >). Lists are simply sequences of things – that is, sequences of lists and atoms. In Mkay, lists are built from so-called cons cells, which are linked together in a chain. Each cons cell consists of two things: the first may be considered the content of the cell (a list or an atom), and the second is a reference to another cons cell or a special terminator called Nil. So for instance, the Mkay expression “(< (len .) 5)” is represented by the following AST:

[Figure: the cons cell representation of the AST]
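The ConsCell type itself isn’t shown in this post, but conceptually a cons cell is just a pair. Here is a minimal sketch of the idea – the member names are my own guesses, not necessarily Mkay’s:

// Conceptual sketch of a cons cell, not Mkay's actual implementation.
public class ConsCell
{
    // The content of the cell: an atom, or the head of a nested list.
    public object Head { get; private set; }

    // The rest of the list: another cons cell, or null standing in for Nil.
    public ConsCell Tail { get; private set; }

    public ConsCell(object head, ConsCell tail)
    {
        Head = head;
        Tail = tail;
    }
}

// The expression (< (len .) 5) then becomes a chain of three cells:
// one holding the symbol <, one holding the nested list (len .),
// and one holding the constant 5, terminated by Nil.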

Once we have obtained the Mkay AST, we can use it to drive the client-side and server-side validations. This happens by subjecting the original AST to a two-pronged transformation process, to create two new, technology-specific ASTs: a .NET expression tree for the server-side validation code and a JSON structure for the client-side validation code. On the server side, the expression tree is compiled at runtime into a validation function that is immediately applied. The JSON structure, on the other hand, is sent to the browser where the jQuery validation machinery picks it up, and hands it over to what is essentially a validation function factory. So there’s code generation there, too, in a way, but it happens in the browser. Conceptually, the process looks like this:

[Figure: overview of the Mkay transformation process]

Let’s look at server-side validation first. To participate in server-side validation, the MkayAttribute overrides the IsValid method inherited from ValidationAttribute. The implementation looks like this:


protected override ValidationResult IsValid(object value, ValidationContext validationContext)
{
    var subject = validationContext.ObjectInstance;
    var memName = validationContext.MemberName;
    if (memName == null)
    {
        throw new Exception(
            "Property name is not set for property with display name " + validationContext.DisplayName
            + ", you should register the MkayValidator with the MkayAttribute in global.asax.");
    }
    var validator = CreateValidator(subject, memName, Tree);
    return validator()
        ? ValidationResult.Success
        : new ValidationResult(ErrorMessage ?? _defaultErrorMessage);
}

private static Func<bool> CreateValidator(object subject, string property, ConsCell ast)
{
    var builder = new ExpressionTreeBuilder(subject, property);
    var viz = new ExpVisitor<Expression>(builder);
    ast.Accept(viz);
    var exp = viz.GetResult();
    var validator = builder.DeriveFunc(exp).Compile();
    return validator;
}

As you can see, the method is passed two parameters, an object called value and a ValidationContext called validationContext. The value parameter is the value of the property we’re validating. The ValidationContext provides – uh – context for the validation, such as a reference to the full object. That’s a good thing, otherwise we wouldn’t be able to access the values of other properties and our efforts would be futile! However, we’re not entirely out of trouble – for some reason, there is no easy way to obtain the name of the property the value belongs to! I presume it’s just a silly oversight by the good ASP.NET MVC folks. In fact, there is actually a MemberName property on the ValidationContext, but it is always null! There is a DisplayName which is populated, but that doesn’t have to be unique and hence isn’t a reliable pathway back to the actual property.

So what to do, what to do? A brittle solution to this very surprising problem would be to use reflection to flip through stack frames and figure out which property the current instance of the MkayAttribute was used to annotate. I’m sure I could get it to work most of the time. However, there’s a much simpler solution. Since ASP.NET MVC is open source, we can quite literally go to the source to find the root of the problem. In doing so, we find that the problem can be traced back to the Validate method in the DataAnnotationsModelValidator class. For whatever reason, the ValidationContext.MemberName property is not set, even though it would be trivial to do so (like, right before or after DisplayName is set). Luckily, ASP.NET MVC is thoroughly configurable, and so it is entirely possible to substitute your own DataAnnotationsModelValidator for the default one. So that’s what Mkay does:


public class MkayValidator : DataAnnotationsModelValidator<MkayAttribute>
{
    public MkayValidator(ModelMetadata metadata, ControllerContext context, MkayAttribute attribute)
        : base(metadata, context, attribute)
    {}

    public override IEnumerable<ModelValidationResult> Validate(object container)
    {
        var context = new ValidationContext(container ?? Metadata.Model, null, null)
        {
            DisplayName = Metadata.GetDisplayName(),
            MemberName = Metadata.PropertyName
        };
        var result = Attribute.GetValidationResult(Metadata.Model, context);
        if (result != ValidationResult.Success)
        {
            yield return new ModelValidationResult { Message = result.ErrorMessage };
        }
    }
}

Finally, the ASP.NET MVC application must be configured to use the replacement validator class. This happens in the Application_Start method in the MvcApplication class, aka global.asax:


public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // … code omitted …
        DataAnnotationsModelValidatorProvider.RegisterAdapter(
            typeof(MkayAttribute),
            typeof(MkayValidator));
    }
}


In case you forget to wire up the custom validator (which would be terribly silly of you), you’ll find that Mkay throws an exception complaining that the name of the property to validate hasn’t been set.

So we’re back on track, and we know the name of the property we’re trying to validate. Now all we have to do is somehow transform the AST we built into .NET validation code that we can execute. To do so, we use expression trees. Expression trees allow us to programmatically build a data structure representing .NET code, and then magically transform it into executable code.

We use the venerable visitor pattern to walk the Mkay AST and build up the expression tree AST. The .NET framework offers factory methods for creating different kinds of expression nodes, such as Expression.Constant for creating a node that represents a constant value, Expression.And for the logical and operation, Expression.Add for plus and Expression.Call to represent a method call. The various methods naturally vary quite a bit with respect to what parameters they demand and the kind of expressions they return. For instance, Expression.And expects two Expression instances of type bool and returns an instance of BinaryExpression, also typed as bool. The various overloads of Expression.Call, on the other hand, return instances of MethodCallExpression and typically require a MethodInfo instance to identify the method to be called, as well as parameters to be passed to the method call. And so on and so forth. Pretty pedestrian stuff, nothing difficult.
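To get a feel for the factory methods, here is a tiny self-contained example – unrelated to Mkay’s actual ExpressionTreeBuilder – that builds and compiles the expression (2 + 3) > 4:

using System;
using System.Linq.Expressions;

class ExpressionTreeDemo
{
    static void Main()
    {
        // Build the tree for: (2 + 3) > 4
        Expression sum = Expression.Add(
            Expression.Constant(2),
            Expression.Constant(3));
        Expression comparison = Expression.GreaterThan(
            sum,
            Expression.Constant(4));

        // Wrap the tree in a parameterless lambda and compile it
        // into a delegate we can actually call.
        Func<bool> check =
            Expression.Lambda<Func<bool>>(comparison).Compile();

        Console.WriteLine(check()); // True
    }
}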

It’s worth noting that you have to be careful and precise about types, though. For instance, the two sub-expressions passed to Expression.Add must be of the exact same type. An integer and a double cannot really be added together in a .NET program. However, if you add an integer and a double in a C# program, the compiler will make the necessary conversion for you (by turning the integer into a double). When using expression trees you need to handle such conversions manually. That is, you need to identify the type mismatch and see if you can resolve it by converting the type of one of the values into the type of the other. The general problem is known as unification in computer science and involves formulae that will make the head of the uninitiated hurt. However, Mkay takes a very simple approach by performing a lookup of available conversions for the types involved.
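To illustrate with a generic example (again, not Mkay’s actual conversion lookup): Expression.Add throws an InvalidOperationException if you hand it an int node and a double node directly, but inserting an Expression.Convert node makes the types line up:

// (using System.Linq.Expressions)

// This throws: the Add operator isn't defined for int and double.
// Expression.Add(Expression.Constant(1), Expression.Constant(2.5));

// Converting the int operand to double first makes it work.
Expression sum = Expression.Add(
    Expression.Convert(Expression.Constant(1), typeof(double)),
    Expression.Constant(2.5));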

When the expression tree has been built, we wrap things up by enveloping it in a lambda expression node of type Expression<Func<bool>>. This gives us access to a magical method called Compile. The Compile method is magical because it turns your expression tree into a validation method that can be executed. And of course that’s exactly what we do. If the state of the object is such that the validation method returns true, everything is well. Otherwise, we complain.

[Screenshot: server-side validation errors]

So as you can see, we have rock-solid server-side validation ensuring that we have short names and no suspicious deaths set in the future. We also happen to have a hard-coded earliest birth date, as well as a guarantee against zombies, but the screenshot doesn’t show that.

Now, let’s offer a superior user experience by doing the same checks client-side, as the user fills in the form. To do so, we must implement IClientValidatable, which in turn means we must implement a method called GetClientValidationRules.


public IEnumerable<ModelClientValidationRule> GetClientValidationRules(
    ModelMetadata metadata, ControllerContext context)
{
    var propName = metadata.PropertyName;
    var builder = new JsonBuilder(propName);
    var visitor = new ExpVisitor<JObject>(builder);
    Tree.Accept(visitor);
    var ast = visitor.GetResult();
    var json = new JObject(
        new JProperty("rule", _ruleSource),
        new JProperty("property", propName),
        new JProperty("ast", ast));
    var rule = new ModelClientValidationRule
    {
        ErrorMessage = ErrorMessage ?? _defaultErrorMessage,
        ValidationType = "mkay"
    };
    rule.ValidationParameters["rule"] = json.ToString();
    yield return rule;
}

The string “mkay” that is set for the ValidationType is essentially a magic string that you need to match up on the JavaScript side. The same goes for the string “rule” that is used as a key for the ValidationParameters dictionary.


jQuery.validator.addMethod("mkay", function (value, element, param) {
    "use strict";
    var ruledata = JSON && JSON.parse(param) || $.parseJSON(param);
    var validator = MKAY.getValidator(ruledata.rule, ruledata.ast);
    return validator();
});

jQuery.validator.unobtrusive.adapters.add("mkay", ["rule"], function (options) {
    "use strict";
    options.rules.mkay = options.params.rule;
    options.messages.mkay = options.message;
});


On the JavaScript side, we have to hook up our client-side validation code to the machinery that is called unobtrusive validation in jQuery. As you can see, the magic strings “mkay” and “rule” appear at various places in the code. Apart from the plumbing, nothing much happens here. A payload of JSON is picked up, deserialized, and passed to a validation function factory thing called MKAY.getValidator. That’s where the JSON AST is turned into an actual JavaScript function. First, though, let’s see an example of a JSON AST.


{
    "type": "call",
    "value": ">",
    "operands": [
        {
            "type": "property",
            "value": "X"
        },
        {
            "type": "call",
            "value": "+",
            "operands": [
                {
                    "type": "property",
                    "value": "A"
                },
                {
                    "type": "property",
                    "value": "B"
                },
                {
                    "type": "property",
                    "value": "C"
                }
            ]
        }
    ]
}

This example shows the JSON AST for the Mkay expression “(> X (+ A B C))”. So in other words, the rule states that the value of X should be greater than the sum of A, B and C.

As we saw earlier, the deserialized JSON is passed to a validation function factory. The transformation process is conceptually pretty simple: every node in the JSON AST becomes a function. A function may be composed from simpler functions, or it may simply return a value, such as a string or an integer. The final validation function corresponds to the root node of the AST.

Let’s look at an example. Below, you see pseudo-code for the validation function produced from the JSON AST for the Mkay expression “(> X (+ A B C))”.


function() {
    return greater-than-function(
        read-property-function("X"),
        plus-function(
            plus-function(
                plus-function(
                    0,
                    read-property-function("C")),
                read-property-function("B")),
            read-property-function("A")));
}


It is pseudo-code because it shows function names that aren’t really there. I’ve included the names to make it easier to understand how functions are composed conceptually. In reality, the validation function consists exclusively of nested anonymous functions.

An important detail is that function arguments are evaluated lazily. That is, the arguments passed to functions are not values, they are themselves functions capable of returning a value. It is the responsibility of each individual function to actually call the argument functions to retrieve the argument values. Why is this? The reason is that every operation in client-side Mkay is implemented as a function, including the logical operators and and or. Since we want short-circuiting semantics for the logical operators, we only evaluate arguments as long as things go well.


function logical(fun, breaker) {
    return function (args) {
        var copy = args.slice(0);
        while (copy.length > 0) {
            var val = copy.pop()();
            if (val === breaker) {
                return breaker;
            }
        }
        return !breaker;
    };
}

var and = logical(function (a, b) { return a && b; }, false);

Here we see how the and function is implemented. The args parameter holds a list of functions that can each be evaluated to a boolean value. We evaluate each function in turn, returning false as soon as one of them evaluates to false; if we make it through the entire list, we return true.

Of course, all evaluations are postponed until we actually invoke the top-level validation function, in which case the evaluations necessary to reach a conclusion are carried out.

That’s all there is to it, really. Now we have client-side validation in Mkay. In practice, it might look like this:

And with that, we’re done. We’ll never have to write a custom validation attribute again, because Mkay is the one validation attribute to rule them all. The code is available here.

Update: Mkay is now available as a nuget package.


The hawk and the tower


Behold the tower of good intentions!

The stack of books shown in the picture is my accumulated backlog of unread books. (Of course, in terms of data structures, it’s not really a stack, it’s more like a priority queue. Although the priority tends to be skewed somewhat towards the most recently purchased books.)

As you can see, the tower is made up of completely awesome, jaw-drop-inducing books. (You can browse them here if you’re interested.) I’m quite convinced that there are no better books out there — except perhaps Knuth, but we already discussed that. (Also, I wasn’t able to locate my copy of Jon Bentley’s Programming Pearls for the picture, otherwise it would be in there somewhere.) And yet I haven’t read any of them! That is, I may have read the introduction, maybe the first chapter, in some cases a bit more. But I don’t think that I’ve read more than 10% of any one of them.

Now the weird thing is that somewhat pathetically, I’ve always associated a bit of pride with my tower of books – as if merely owning these great books would somehow cause some of their greatness to rub off on me. The notion is clearly absurd – it makes no sense – but that’s the way it has been. Lately though, I can’t really look at the tower without thinking of my favorite Whitman quote, from Song of Myself:

The spotted hawk swoops down and accuses me, he complains of my gab and my loitering.

Gab and loitering indeed! I read about books, I purchase books, I write blog posts about books – what about actually reading them?

Clearly the feeling of pride is inappropriate and unfounded. You take pride in what you’ve done, not what you may or may not do in the future. What does the tower represent? The opposite of action! The absence of accomplishment! It’s a monument over things I haven’t done!

The best thing you can say about the tower is that it shows some ambition and awareness – at least I am knowledgeable enough to recognize great books when I see them! I guess that’s something. A morsel. And of course the tower represents potential energy in the sense of potential learning. I have a well of knowledge available, right in front of me, that I can tap into any time I want to. Finally, the tower arguably says something about who I would like to be, what I would like to know. For instance, it is easy by glancing at the tower to infer that I have an interest in functional programming in general and Lisp(s) in particular. That’s good, I suppose – I feel OK about that.

What appears to be lacking, though, is commitment, focus, and getting things done – in particular getting books read! This has deeper repercussions as well, because it casts a shadow of doubt over the proclaimed ambition. If I really wanted to learn all these things, shouldn’t I be reading my books?

Let’s not jump to any conclusions just yet, though. First, let’s hear from the defense. What is keeping me from reading?

Well, there are two primary impediments that come to mind: time and discomfort.

Time is pretty simple. I have a family, hence I don’t have time. Or rather, time is always a scarce resource. After the kids have gone to bed, I have 2-3 hours available to do “things”. For the week in total, I’d say the number of free hours oscillates between 10 and 20. Reading books now has to compete with any number of other activities, both mandatory (doing laundry) and optional (watching a movie) and in-between (spending time with my wife), for a slice of that time. Hence there are limits to the amount of time I both can and am willing to put into reading. The builder of the tower, on the other hand, isn’t aware of time – he just tends to purchase all the books he wants to read. So there’s a gap there. It’s not at all obvious that the time I have available will be nearly sufficient to consume books as quickly as I buy them. In fact, let’s investigate it a little bit by doing a bit of trivial math. Methinks the math will speak volumes about the realism of working my way through the tower.

For instance, say I want to learn Python in greater depth. I decide to work through Zed Shaw’s Learn Python The Hard Way (which is not in my book tower, but it is on my wish list! Oh dear.) It seems like a reasonable way to go. Now, LPTHW consists of 52 chapters. That means that if I work through one chapter per week, that’s an entire year to work through that one book. Obviously I could cut that down to half a year by doing two chapters per week instead, but I would have to double my reading budget. I could cut it down to three months, but then I’d have to quadruple it. Am I willing to do that? I’m not sure, because truth be told, I don’t actually have a reading budget. So I can’t really answer those questions meaningfully. (I guess that’s an improvement point right there. I should totally make a reading budget. (And a laundry budget. And a movie budget. And a wife budget? Don’t think I’ll get away with that.)) Still, it’s fairly obvious that I have to prioritize quite heavily which books I really want to read – and that working my way through even parts of the tower is going to take years. Might as well come to terms with that.

And that’s pretty much it for time. Make a budget, prioritize accordingly. The budget cannot be made arbitrarily small and still be meaningful, though. When I read a book, each sentence leaves a rather soft and volatile imprint in my memory. It will get wiped out relatively quickly if I don’t keep at it. There’s a critical mass of sustained reading necessary in order to keep my mind in the book, so to speak. It’s like riding a wave. If I don’t keep with the flow, I will fall off with a splash. Then I will have to backtrack and re-read and try to ride the next wave. If the pattern repeats too many times, I’ll have to start over at page one. Also, the critical reading mass depends on the subject matter – the more complex it is, the more information I need to keep in my mind at the same time, and hence the more intense and sustained reading required to stay abreast.

And that is it for time. Time is pretty simple. Just make sure that the reading budget is sufficient for riding the wave. The second impediment, discomfort, is much more – uh – discomforting.

You see, a common trait among the books in the tower is that they entail learning. The challenge is that any significant act of learning involves some amount of discomfort. When learning something non-trivial, something actually worth learning, there will be resistance. There will be things I don’t understand, at least not immediately – perhaps I may need to read, re-read and re-read again in order to come anywhere near grasping it. That can be frustrating and painful.

The feeling of discomfort is amplified by the fact that my brain is getting older and a bit rusty. The neurons are behaving increasingly anti-socially, they’re grumpy and less willing to make new associations than they used to be. Perhaps they’ve been hanging out with me for too long, I may have had a bad influence. Anyways, a less flexible brain means even more discomfort and harder work in order to learn something new.

This brings me to the scary part, which I call topic skipping. The problem is that the discomfort of reading a book that requires learning kicks in exactly when introductions are over with, and the material starts offering genuine resistance. (You’ll recall that I’ve read up to 10% of all the books in the tower.) At that point, it’s all too easy to jump ship, abandon the book, and start over with something new. That’s a pretty pathological pattern. In a way, it resembles what is known as disk thrashing; a term used to describe a hard drive that is overworked by moving information between the system memory and virtual memory excessively.

Now according to Wikipedia, that trustworthy workhorse of information that may or may not be trustworthy, a user can do the following to alleviate disk thrashing:

  • Increase the amount of RAM in the computer.
  • Adjust the size of the swap file.
  • Decrease the amount of programs being run on the computer.

In terms of reading, this translates roughly to increasing the size of your brain (a good idea, but requires significant advances in medicine), increasing the amount of time available for reading (I’d like to, but cannot conjure up any more hours in the day) and decreasing the number of books you’re trying to read at once (excellent advice!).

The main problem with both disk thrashing and topic skipping is waste. You appear to be working, but you’re really just spending your time changing topics. Given that time is a scarce resource, it makes no sense to waste it like that. It would be much better if I would just harden up, endure the discomfort of feeling stupid, and resist the temptation of starting over on some new and shiny topic.

So there you have it. Time and discomfort. That’s why I’m not reading books fast enough, that’s why reading efforts often get abandoned after the first chapter, and that’s why my stack of unread books is growing into a veritable tower of Babel. Some defense! Turns out I’m a busy wimp! I’m afraid the spotted hawk won’t be too impressed!

I still have a teeny tiny trick up my sleeve, though. I believe commitment is the antidote to both gab and loitering, and it turns out that involving other people does wonders for creating commitment. So I’m teaming up with some compatriots from work to form a book circle. First book up is SICP which, coincidentally, you’ll find residing at the top of the tower. So there is hope! Which is good, because I totally need to make room for this awesome-looking book which discusses multi-paradigm programming using an obscure language called Oz. How’s that for shiny!


To Knuth or not to Knuth

I received an email with an interesting question a while back. The question was this:

Dear doctor, should I read Knuth?

As you can see, the email was deviously crafted to appeal to my vanity – which of course was a clever thing to do, since it spurred one of those lengthy, slightly self-absorbed responses I’m wont to give. However, it also triggered an unexpected chain of events which is now resulting in this blog post. You see, the flattery inflated my ego to the degree that I figured I should follow Scott Hanselman’s advice, and maximize the effectiveness and reach of my keystrokes. I am therefore publishing my response here to the vast readership of my blog. That means you, dear reader! Nothing like a bit of old-fashioned hubris!

So, should you read Knuth? I dunno.

It’s an interesting question, though, with some pretty deep repercussions. In fact, it is interesting enough that Peter Seibel included it as one of the questions he asked each of the prominent programmers he interviewed for the book “Coders at Work”. You should totally read that book, by the way.

Anyways. In case you’re unfamiliar with Knuth, we’re talking about this guy:


Professor Donald Knuth

Knuth received the ACM Turing award in 1974 (a year before I was born) for his work on the analysis of algorithms and data structures. That work is captured in the first three volumes of a series of books known as The Art of Computer Programming or TAOCP for short. “Reading Knuth” refers to reading those books.

Of course, that’s a pretty significant argument for reading Knuth right there. The content of the books is worth a Turing award! I bet you don’t have many books in your bookshelf for which that is true. I don’t, and I have awesome books! TAOCP is the quintessential, be-all end-all resource for the study of algorithms. If you are familiar with the so-called big-oh notation for estimating run-time costs of algorithms, that’s due to Knuth’s influence.

One of the quirks of Knuth is that apparently all the examples in TAOCP use a language called “MIX assembly language”. It runs on a hypothetical computer called “MIX”, dreamed up by Knuth. Clearly, then, the examples are not directly transferable to your day-to-day work as a programmer. And undoubtedly, it can be somewhat off-putting for modern readers to have to learn everything by means of this ancient assembly language for an obsolete machine that never existed. To be fair, Knuth has since come up with MMIX, a “64-bit RISC architecture for the third millennium” (no less), intended to replace the original MIX in TAOCP. I’m not quite sure how far the work of upgrading MIX to MMIX has progressed, though. Also I’m not quite sure that it really rectifies the situation. But YMMV.

A further obstacle is that TAOCP requires significant mathematical sophistication on the part of its readers. It speaks volumes to me when Guy L. Steele Jr. claims he lacks the mathematical background to read parts of it. What about us mere mortals, then?

These challenges conspire to raise some serious doubt: is reading Knuth worth the effort? Clearly it requires a lot of intellectual struggle and perseverance. Isn’t it simply too much trouble to go through? At the same time, the challenges contribute to a further selling point for Knuth: TAOCP as the ultimate rite-of-passage for programmers. Arguably you’ve beaten the game of programming and computers in general when you’ve read Knuth.

So then: do I think you should read Knuth? I dunno.

I feel like it’s an important question to try to answer, though. It’s kind of amusing, really – it reminds me of the opening of “The Myth of Sisyphus” by Albert Camus, 20th century French writer and philosopher. It was one of my favorite books way back when I was young and oh-so much older than I am now. I would be sitting in a cafe, reading French literature and aiming for an air of profoundness and mystery by occasionally raising my head and gazing at the world through faraway eyes. It attracted fewer girls than you’d think.

Anyways, the opening goes like this:

There is but one truly serious philosophical problem, and that is suicide. Judging whether life is or is not worth living amounts to answering the fundamental question of philosophy. All the rest – whether or not the world has three dimensions, whether the mind has nine or twelve categories – comes afterwards. These are games; one must first answer. And if it is true, as Nietzsche claims, that a philosopher, to deserve our respect, must preach by example, you can appreciate the importance of that reply, for it will precede the definitive act. These are facts the heart can feel; yet they call for careful study before they become clear to the intellect.

Isn’t it just grand? Who cares about girls when you can read stuff like that.

Translated to the Knuth dilemma: if I reach the conclusion that you should read Knuth, I immediately have a problem: it follows suit that I should proceed to read Knuth as well! (Indeed, how can I claim to give a meaningful and convincing recommendation for or against it without actually having read the books first?)

So then: should I read Knuth? I dunno.

So far in my career, I’ve sort of dodged the issue of Knuth and TAOCP. I’ve avoided ordering the books despite their obvious merits because I’ve been quite convinced that they would just be sitting on a bookshelf collecting dust and radiating failure. As long as I don’t actually have the books in my possession, I don’t feel that bad about not having read them. But if I were to own the books, and I still didn’t read them, I would actively have not-read Knuth. Which is clearly a lot worse. Hence I’ve backed out of the challenge thus far by cunningly avoiding purchase. But of course that has more to do with my knowing a thing or two about my innate laziness and resistance to discomfort than anything else. It’s not a good reason to avoid Knuth, but at least a truthful one.

But that’s me, not necessarily you. What about you? Should you read Knuth? I dunno.

In my mind, reading Knuth is similar to choosing to walk three years by foot to some remote location in the mountains, in order to study with some famous zen monk (a thought that is somehow made more amusing by the fact that Knuth bears a certain resemblance to Yoda). Once you’ve made up your mind to study with this monk, you don’t really question why he does things this way or that way – you just do as you’re told. Wax on, wax off, if you like.


Taktshang – the Tiger’s Nest monastery

Of course, there is an underlying assumption that you will, at some point, become enlightened. The invisible veil that hindered your sight before (although you were not aware of it) will fall from your eyes, and everything will become clear. But you don’t ask yourself when or how this will happen. I believe this is the attitude with which you need to approach Knuth in general, and the MIX architecture in particular.

However. While I am quite convinced that you will be enlightened should you seek Master Knuth in his monastery, I am equally convinced that there are other paths that lead to enlightenment as well.

In the end, I suppose it comes down to which monk you want to study with. What are you looking for? What is the nature of the enlightenment you seek? If the answer is that you want the most profound understanding possible of how algorithms and data structures correlate with the detailed workings of the physical machine, you should read Knuth. If the answer is something else, well, perhaps you should read something else.

For my part, I’m not sure I won’t at some point in the future, in fact, walk the long, winding, barefoot road to Knuth. First, though, I want to be a wizard’s apprentice in the school of Abelson and Sussman. There I can study the problems of calculation and abstraction free from the tethering of the underlying hardware. It seems like so much more fun, to be honest. That may be immature grounds for making such an important decision, but there you go. I’m so much younger now that I’m older, I can afford those kinds of transgressions.


Strings dressed in nested tags

If you read the previous blog post, you might wonder if you can wrap a string in nested tags, you know, something like this:


Func<string, string> nested =
    s => s.Tag("td").Colspan("2")
          .Width("100")
          .Tag("tr")
          .Tag("table").Cellpadding("10")
          .Border("1");


And the answer is no. No, you can’t. Well you can, but it’s not going to give you the result you want. For instance, if you apply the transform to the string “Hello”, you’ll get this:

[Image: the mangled output of the naive nested transform]

Which is useless.

The reason is obviously that the Tag method calls following the first one will all be channeled into the same Tag. Even though there’s an implicit cast to string, there’s nothing in the code triggering that cast. Of course, you could explicitly call ToString on the Tag, like so:


Func<string, string> nested =
    s => s.Tag("td").Colspan("2")
          .Width("100")
          .ToString()
          .Tag("tr").ToString()
          .Tag("table").Cellpadding("10")
          .Border("1");

But that’s admitting defeat, since it breaks the illusion we’re trying to create. Plus it’s ugly.

A better way of working around the problem is to compose simple one-tag transforms, like so:


Func<string, string> cell =
    s => s.Tag("td").Colspan("2")
          .Width("100");

Func<string, string> row =
    s => s.Tag("tr");

Func<string, string> table =
    s => s.Tag("table").Cellpadding("10")
          .Border("1");

Func<string, string> nested =
    s => table(row(cell(s)));

Which is kind of neat and yields the desired result:

[Image: the correctly nested output]

But we can attack the problem more directly. There’s not a whole lot we can do to prevent our Tag object from capturing the subsequent method calls to Tag. But we are free to respond to those method calls in any ol’ way we like. A trivial change to TryInvokeMember will do just nicely:


public override bool TryInvokeMember(
    InvokeMemberBinder binder,
    object[] args,
    out object result)
{
    string arg = GetValue(args);
    string methodName = binder.Name;
    if (methodName == "Tag" && arg != null)
    {
        result = ToString().Tag(arg);
    }
    else
    {
        _props[methodName] = arg ?? string.Empty;
        result = this;
    }
    return true;
}


So we just single out calls for a method named Tag with a single string parameter. For those method calls, we’re not going to do the regular fluent collection of method names and parameters thing. Instead, we’ll convert the existing Tag to a string, and return a brand new Tag to wrap that string. And now we can go a-nesting tags as much as we’d like, and still get the result we wanted. Win!
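With that change in place, the nested transform from the top of the post produces the output we were after. Something like this – note that the exact attribute order depends on the dictionary, so take the output line as an illustration:

string result = nested("Hello");
// <table cellpadding="10" border="1"><tr><td colspan="2" width="100">Hello</td></tr></table>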


Strings dressed in tags

In a project I’m working on, we needed a simple way of wrapping strings in tags in a custom grid in our ASP.NET MVC application. The strings should only be wrapped given certain conditions. We really wanted to avoid double if checks, you know, once for the opening tag and once for the closing tag?

We ended up using a Func from string to string to perform wrapping as appropriate. By default, the Func would just be the identity function; that is, it would return the string unchanged. When the right conditions were fulfilled, though, we’d replace it with a Func that would create a new string, where the original one was wrapped in the appropriate tag.

The code I came up with lets you write transforms such as this:


Func<string, string> transform =
    s => s.Tag("a")
          .Href("http://einarwh.posterous.com")
          .Style("font-weight: bold");

Which is pretty elegant and compact, don’t you think? Though perhaps a bit unusual. In particular, you might be wondering about the following:

  1. How come there’s a Tag method on the string?
  2. Where do the other methods come from?
  3. How come the return value is a string?

So #1 is easy, right? It has to be an extension method. As you well know, an extension method is just an illusion created by the C# compiler. But it’s a neat illusion that allows for succinct syntax. The extension method looks like this:


public static class StringExtensions
{
    public static dynamic Tag(this string content, string name)
    {
        return new Tag(name, content);
    }
}

So it simply creates an instance of the Tag class, passing in the string to be wrapped and the name of the tag. That’s all. So that explains #2 as well, right? Href and Style must be methods defined on the Tag class? Well no. That would be tedious work, since we’d need methods for all possible HTML tag attributes. I’m not doing that.

If you look closely at the signature of the Tag method, you’ll see that it returns an instance of type dynamic. Now what does that mean, exactly? When dynamic was introduced in C# 4, prominent bloggers were all “oooh it’s statically typed as dynamic, my mind is blown, yada yada yada”, you know, posing as if they didn’t have giant brains grokking this stuff pretty easily? It’s not that hard. As usual, the compiler is sugaring the truth for us. Our trusty ol’ friend ILSpy is kind enough to let us figure out what dynamic really means, by revealing all the gunk the compiler spews out in response to it. You’ll find that it introduces a CallSite at the point in code when you’re interacting with the dynamic type, as well as a CallSiteBinder to handle the run-time binding of operations on the CallSite.

We don’t have to deal with all of that, though. Long story short, Tag inherits from DynamicObject, a built-in building block for creating types with potentially interesting dynamic behaviour. DynamicObject exposes several virtual methods that are called during run-time method dispatch. So basically when the run-time is trying to figure out which method to invoke and to invoke it, you’ve got these nice hooks where you can insert your own stuff. Tag, for instance, implements its own version of TryInvokeMember, which is invoked by the run-time to, uh, you know, try to invoke a member? It takes the following arguments:

  • An instance of InvokeMemberBinder (a subtype of CallSiteBinder) which provides run-time binding information.
  • An array of objects containing any arguments passed to the method.
  • An out parameter which should be assigned the return value for the method.

Here is Tag‘s implementation of TryInvokeMember:


public override bool TryInvokeMember(
    InvokeMemberBinder binder,
    object[] args,
    out object result)
{
    _props[binder.Name] = GetValue(args) ?? string.Empty;
    result = this;
    return true;
}

private string GetValue(object[] args)
{
    if (args.Length > 0)
    {
        var arg = args[0] as string;
        if (arg != null)
        {
            return arg;
        }
    }
    return null;
}

What does it do? Well, not a whole lot, really. Essentially it just hamsters values from the method call (the method name and its first argument) in a dictionary. So for instance, when trying to call the Href method in the example above, it’s going to store the value “http://einarwh.posterous.com” for the key “href”. Simple enough. And what about the return value from the Href method call? We’ll just return the Tag instance itself. That way, we get a nice fluent composition of method calls, all of which end up in the Tag‘s internal dictionary. Finally we return true from TryInvokeMember to indicate that the method call succeeded.

Of course, you’re not going to get any IntelliSense to help you get the attributes for your HTML tags right. If you misspell Href, that’s your problem. There’s no checking of anything, this is all just a trick for getting a compact syntax.

Finally, Tag defines an implicit cast to string, which explains #3. The implicit cast just invokes the ToString method on the Tag instance.


public static implicit operator string(Tag tag)
{
    return tag.ToString();
}

public override string ToString()
{
    var sb = new StringBuilder();
    sb.Append("<").Append(_name);
    foreach (var p in _props)
    {
        sb.Append(" ")
          .Append(p.Key.ToLower())
          .Append("=\"")
          .Append(p.Value)
          .Append("\"");
    }
    sb.Append(">")
      .Append(_content)
      .Append("</")
      .Append(_name)
      .Append(">");
    return sb.ToString();
}

The ToString method is responsible for actually wrapping the original string in opening and closing tags, as well as injecting any hamstered dictionary entries into the opening tag as attributes.

And that’s it, really. That’s all there is. Here’s the complete code:


namespace DynamicTag
{
    class Program
    {
        static void Main()
        {
            string s = "blog"
                .Tag("a")
                .Href("http://einarwh.posterous.com")
                .Style("font-weight: bold");
            Console.WriteLine(s);
            Console.ReadLine();
        }
    }

    public class Tag : DynamicObject
    {
        private readonly string _name;
        private readonly string _content;
        private readonly IDictionary<string, string> _props =
            new Dictionary<string, string>();

        public Tag(string name, string content)
        {
            _name = name;
            _content = content;
        }

        public override bool TryInvokeMember(
            InvokeMemberBinder binder,
            object[] args,
            out object result)
        {
            _props[binder.Name] = GetValue(args) ?? string.Empty;
            result = this;
            return true;
        }

        private string GetValue(object[] args)
        {
            if (args.Length > 0)
            {
                var arg = args[0] as string;
                if (arg != null)
                {
                    return arg;
                }
            }
            return null;
        }

        public override string ToString()
        {
            var sb = new StringBuilder();
            sb.Append("<").Append(_name);
            foreach (var p in _props)
            {
                sb.Append(" ")
                  .Append(p.Key.ToLower())
                  .Append("=\"")
                  .Append(p.Value)
                  .Append("\"");
            }
            sb.Append(">")
              .Append(_content)
              .Append("</")
              .Append(_name)
              .Append(">");
            return sb.ToString();
        }

        public static implicit operator string(Tag tag)
        {
            return tag.ToString();
        }
    }

    public static class StringExtensions
    {
        public static dynamic Tag(this string content, string name)
        {
            return new Tag(name, content);
        }
    }
}



Introducing μnit

Last week, I teamed up with Bjørn Einar (control-engineer gone js-hipster) and Jonas (bona fide Smalltalk hacker) to talk about .NET Gadgeteer at the NDC 2012 conference in Oslo. .NET Gadgeteer is a rapid prototyping platform for embedded devices running the .NET micro framework – a scaled-down version of the .NET framework itself. You can read the abstract of our talk here if you like. The talk itself is available online as well. You can view it here.

The purpose of the talk was to push the envelope a bit, and try out things that embedded .NET micro devices aren’t really suited for. We think it’s important for developers to fool around a bit, without considering mundane things like business value. That allows for immersion and engagement in projects that are pure fun.

I started gently though, with a faux-test driven implementation of Conway’s Game of Life. That is, I wrote the implementation of Life first, and then retro-fitted a couple of unit tests to make it seem like I’d followed the rules of the TDD police. That way I could conjure up the illusion of a true software craftsman, when in fact I’d just written a few tests after the implementation was done, regression tests if you will.

I feel like I had a reasonable excuse for cheating though: at the time, there were no unit testing frameworks available for the .NET micro framework. So you know how TDD opponents find it tedious to write the test before the implementation? Well, in this case I would have to write the unit testing framework before writing the test as well. So the barrier to entry was a wee bit higher.

Now in order to create the illusion of proper craftsmanship in retrospect, I did end up writing tests, and in order to do that, I did have to write my own testing framework. So procrastination didn’t really help all that much. But there you go. Goes to show that the TDD police is on to something, I suppose.

Anyways, the testing framework I wrote is called μnit, pronounced [mju:nit]. Which is a terribly clever name, I’m sure you’ll agree. First off, the μ looks very much like a u. So in terms of glyphs, it basically reads like unit. At the same time, the μ is used as a prefix signifying “micro” in the metric system of measurement – which is perfect since it’s written for the .NET *micro* framework. So yeah, it just reeks of clever, that name.

Implementation-wise it’s pretty run-of-the-mill, though. You’ll find that μnit works just about like any other xUnit framework out there. While the .NET micro framework is obviously scaled down compared to the full .NET framework, it is not a toy framework. Among the capabilities it shares with its bigger sibling is reflection, which is the key ingredient in all the xUnit frameworks I know of. Or at least I suppose it is, I haven’t really looked at the source code of any of them. Guess I should. Bound to learn something.

Anyways, the way I think these frameworks work is that you have some mechanics for identifying test methods hanging off of test classes. For each test method, you create an instance of the test class, run the method, and evaluate the result. Since you don’t want to state explicitly which test methods to run, you typically use reflection to identify and run all the test methods instead. At least that’s how μnit works.

One feature that got axed in the .NET micro framework is custom attributes, and hence there can be no [Test] annotation for labelling test methods. So μnit uses naming conventions for identifying test methods instead, just like in jUnit 3 and earlier. But that’s just cosmetics, it doesn’t really change anything. In μnit we use the arbitrary yet common convention that test methods should start with the prefix “Test”. In addition, they must be public, return void and have no parameters. Test classes must inherit from the Fixture base class, and must have a parameterless constructor. All catering for the tiny bit of reflection voodoo necessary to run the tests.
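To make the conventions concrete, here is what a μnit test class might look like. This is a hypothetical example: the Calculator class is invented, and I’m assuming a failed check signals failure by throwing the AssertFailedException that the test runner catches (shown further below):

public class CalculatorTests : Fixture
{
    private Calculator _calc; // hypothetical class under test

    // Runs before each test method.
    public override void Setup()
    {
        _calc = new Calculator();
    }

    // Found by the runner: public, returns void, takes no parameters,
    // and the name starts with "Test".
    public void TestAddition()
    {
        if (_calc.Add(2, 2) != 4)
        {
            throw new AssertFailedException();
        }
    }

    // Not a test: the name lacks the "Test" prefix.
    public void Helper() {}
}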

Here’s the Fixture class that all test classes must inherit from:


namespace Mjunit
{
    public abstract class Fixture
    {
        public virtual void Setup() {}
        public virtual void Teardown() {}
    }
}


As you can see, Fixture defines empty virtual methods for set-up and tear-down, named Setup and Teardown, respectively. Test classes can override these to make something actually happen before and after a test method is run. Conventional stuff.

The task of identifying test methods to run is handled by the TestFinder class.


namespace Mjunit
{
    public class TestFinder
    {
        public ArrayList FindTests(Assembly assembly)
        {
            var types = assembly.GetTypes();
            var fixtures = GetTestFixtures(types);
            var groups = GetTestGroups(fixtures);
            return groups;
        }

        private ArrayList GetTestFixtures(Type[] types)
        {
            var result = new ArrayList();
            for (int i = 0; i < types.Length; i++)
            {
                var t = types[i];
                if (t.IsSubclassOf(typeof(Fixture)))
                {
                    result.Add(t);
                }
            }
            return result;
        }

        private ArrayList GetTestGroups(ArrayList fixtures)
        {
            var result = new ArrayList();
            foreach (Type t in fixtures)
            {
                var g = new TestGroup(t);
                if (g.NumberOfTests > 0)
                {
                    result.Add(g);
                }
            }
            return result;
        }
    }
}


You might wonder why I’m using the feeble, untyped ArrayList, giving the code that unmistakeable old-school C# 1.1 tinge? The reason is simple: the .NET micro framework doesn’t have generics. But we managed to get by in 2003, we’ll manage now.

What the code does is pretty much what we outlined above: fetch all the types in the assembly, identify the ones that inherit from Fixture, and proceed to create a TestGroup for each test class we find. A TestGroup is just a thin veneer on top of the test class:


using System;
using System.Collections;

namespace Mjunit
{
    class TestGroup : IEnumerable
    {
        private readonly Type _testClass;
        private readonly ArrayList _testMethods = new ArrayList();

        public TestGroup(Type testClass)
        {
            _testClass = testClass;
            var methods = _testClass.GetMethods();
            for (int i = 0; i < methods.Length; i++)
            {
                var m = methods[i];
                // Guard against names shorter than the prefix before
                // checking the "Test" naming convention.
                if (m.Name.Length >= 4 &&
                    m.Name.Substring(0, 4) == "Test" &&
                    m.ReturnType == typeof(void))
                {
                    _testMethods.Add(m);
                }
            }
        }

        public Type TestClass
        {
            get { return _testClass; }
        }

        public int NumberOfTests
        {
            get { return _testMethods.Count; }
        }

        public IEnumerator GetEnumerator()
        {
            return _testMethods.GetEnumerator();
        }
    }
}


The TestFinder is used by the TestRunner, which does the bulk of the work in μnit, really. Here it is:


using System;
using System.Collections;
using System.Reflection;
using System.Threading;
using Microsoft.SPOT;

namespace Mjunit
{
    public class TestRunner
    {
        private Thread _thread;
        private Assembly _assembly;
        private bool _done;

        public event TestRunEventHandler SingleTestComplete;
        public event TestRunEventHandler TestRunStart;
        public event TestRunEventHandler TestRunComplete;

        public TestRunner() {}

        public TestRunner(ITestClient client)
        {
            RegisterClient(client);
        }

        public TestRunner(ArrayList clients)
        {
            foreach (ITestClient c in clients)
            {
                RegisterClient(c);
            }
        }

        public bool Done
        {
            get { return _done; }
        }

        public void RegisterClient(ITestClient client)
        {
            TestRunStart += client.OnTestRunStart;
            SingleTestComplete += client.OnSingleTestComplete;
            TestRunComplete += client.OnTestRunComplete;
        }

        public void Run(Type type)
        {
            Run(Assembly.GetAssembly(type));
        }

        public void Run(Assembly assembly)
        {
            _assembly = assembly;
            _thread = new Thread(DoRun);
            _thread.Start();
        }

        public void Cancel()
        {
            _thread.Abort();
        }

        private void DoRun()
        {
            FireCompleteEvent(TestRunStart, null);
            var gr = new TestGroupResult(_assembly.FullName);
            try
            {
                var finder = new TestFinder();
                var groups = finder.FindTests(_assembly);
                foreach (TestGroup g in groups)
                {
                    gr.AddResult(Run(g));
                }
            }
            catch (Exception ex)
            {
                Debug.Print(ex.Message);
                Debug.Print(ex.StackTrace);
            }
            FireCompleteEvent(TestRunComplete, gr);
            _done = true;
        }

        private void FireCompleteEvent(TestRunEventHandler handler,
                                       ITestResult result)
        {
            if (handler != null)
            {
                var args = new TestRunEventHandlerArgs { Result = result };
                handler(this, args);
            }
        }

        private TestClassResult Run(TestGroup group)
        {
            var result = new TestClassResult(group.TestClass);
            foreach (MethodInfo m in group)
            {
                var r = RunTest(m);
                FireCompleteEvent(SingleTestComplete, r);
                result.AddResult(r);
            }
            return result;
        }

        private SingleTestResult RunTest(MethodInfo m)
        {
            try
            {
                DoRunTest(m);
                return TestPassed(m);
            }
            catch (AssertFailedException ex)
            {
                return TestFailed(m, ex);
            }
            catch (Exception ex)
            {
                return TestFailedWithException(m, ex);
            }
        }

        private void DoRunTest(MethodInfo method)
        {
            Fixture testObj = null;
            try
            {
                testObj = GetInstance(method.DeclaringType);
                testObj.Setup();
                method.Invoke(testObj, new object[0]);
            }
            finally
            {
                if (testObj != null)
                {
                    testObj.Teardown();
                }
            }
        }

        private Fixture GetInstance(Type testClass)
        {
            var ctor = testClass.GetConstructor(new Type[0]);
            return (Fixture)ctor.Invoke(new object[0]);
        }

        private SingleTestResult TestFailedWithException(
            MethodInfo m, Exception ex)
        {
            return new SingleTestResult(m, TestOutcome.Fail)
                { Exception = ex };
        }

        private SingleTestResult TestFailed(
            MethodInfo m, AssertFailedException ex)
        {
            return new SingleTestResult(m, TestOutcome.Fail)
                { AssertFailedException = ex };
        }

        private SingleTestResult TestPassed(MethodInfo m)
        {
            return new SingleTestResult(m, TestOutcome.Pass);
        }
    }
}


That’s a fair amount of code, and quite a few new concepts that haven’t been introduced yet. At a high level, though, it’s not that complex. It works as follows. The user of a test runner will typically be interested in notifications during the test run. Hence TestRunner exposes three events that fire when the test run starts, when it completes, and after each individual test, respectively. To receive notifications, the user can either hook up to those events directly or register one or more so-called test clients. We’ll look at some examples of test clients later on. To avoid blocking test clients and to support cancellation of the test run, the tests run in their own thread.
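In code, kicking off a test run might look something like the sketch below. It uses only the TestRunner members shown above; GameOfLifeTests is a made-up fixture name, and LedTestClient, shown further down, is one of the test clients I wrote.

// Register a test client and start a run. The tests run on their
// own thread, so we rely on the events for notification (or poll
// the Done property, as we do here).
var runner = new TestRunner();
runner.RegisterClient(new LedTestClient(led)); // led: a MulticolorLed
runner.Run(typeof(GameOfLifeTests));
while (!runner.Done)
{
    Thread.Sleep(100);
}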

As you can see from the RunTest method, each test results in a SingleTestResult, containing a TestOutcome of Pass or Fail. I don’t know how terribly useful it is, but μnit currently distinguishes between failures due to failed assertions and failures due to other exceptions. It made sense at the time.

The SingleTestResult instances are aggregated into TestClassResult instances, which in turn are aggregated into a single TestGroupResult instance representing the entire test run. All of these classes implement ITestResult, which looks like this:


namespace Mjunit
{
    public interface ITestResult
    {
        string Name { get; }
        TestOutcome Outcome { get; }
        int NumberOfTests { get; }
        int NumberOfTestsPassed { get; }
        int NumberOfTestsFailed { get; }
    }
}


Now for a SingleTestResult, the NumberOfTests will obviously be 1, whereas for a TestClassResult it will match the number of SingleTestResult instances contained by the TestClassResult, and similarly for the TestGroupResult.
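Neither TestOutcome nor the result classes are shown in this post, but based on how they’re used above, a plausible sketch of SingleTestResult might look like this (the actual μnit implementation may differ):

using System;
using System.Reflection;

// Sketch only: inferred from the usage in TestRunner above.
public enum TestOutcome { Pass, Fail }

public class SingleTestResult : ITestResult
{
    private readonly MethodInfo _method;
    private readonly TestOutcome _outcome;

    public SingleTestResult(MethodInfo method, TestOutcome outcome)
    {
        _method = method;
        _outcome = outcome;
    }

    // Set via object initializers in TestRunner.
    public Exception Exception { get; set; }
    public AssertFailedException AssertFailedException { get; set; }

    public string Name
    {
        get { return _method.Name; }
    }

    public TestOutcome Outcome
    {
        get { return _outcome; }
    }

    public int NumberOfTests
    {
        get { return 1; }
    }

    public int NumberOfTestsPassed
    {
        get { return _outcome == TestOutcome.Pass ? 1 : 0; }
    }

    public int NumberOfTestsFailed
    {
        get { return _outcome == TestOutcome.Fail ? 1 : 0; }
    }
}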

So that pretty much wraps it up for the core of μnit. Let’s see how things look on the client side, for someone who wants to use μnit to write some tests. The most convenient thing to do is probably to register a test client; that is, some object that implements ITestClient. ITestClient looks like this:


namespace Mjunit
{
    public interface ITestClient
    {
        void OnTestRunStart(object sender,
                            TestRunEventHandlerArgs args);
        void OnSingleTestComplete(object sender,
                                  TestRunEventHandlerArgs args);
        void OnTestRunComplete(object sender,
                               TestRunEventHandlerArgs args);
    }
}


The registered test client will then receive callbacks as appropriate when the tests are running.

In order to be useful, test clients typically need to translate notifications into something that a human can see and act upon if necessary. In the .NET Gadgeteer world, that means interacting with some piece of hardware.

For the Game of Life implementation (which can be browsed here if you’re interested) I implemented two test clients interacting with elements of the FEZ Spider kit: a DisplayTestClient that shows test results on a small display, and a LedTestClient that simply uses a multicolored LED light to give feedback to the user. Here’s the code for the latter:


namespace Mjunit.Clients.GHI
{
    public class LedTestClient : ITestClient
    {
        private readonly MulticolorLed _led;
        private bool _isBlinking;
        private bool _hasFailed;

        public LedTestClient(MulticolorLed led)
        {
            _led = led;
            Init();
        }

        public void Init()
        {
            _led.TurnOff();
            _isBlinking = false;
            _hasFailed = false;
        }

        public void OnTestRunStart(object sender,
            TestRunEventHandlerArgs args)
        {
            Init();
        }

        public void OnTestRunComplete(object sender,
            TestRunEventHandlerArgs args)
        {
            OnAnyTestComplete(sender, args);
        }

        private void OnAnyTestComplete(object sender,
            TestRunEventHandlerArgs args)
        {
            if (!_hasFailed)
            {
                if (args.Result.Outcome == TestOutcome.Fail)
                {
                    _led.BlinkRepeatedly(Colors.Red);
                    _hasFailed = true;
                }
                else if (!_isBlinking)
                {
                    _led.BlinkRepeatedly(Colors.Green);
                    _isBlinking = true;
                }
            }
        }

        public void OnSingleTestComplete(object sender,
            TestRunEventHandlerArgs args)
        {
            OnAnyTestComplete(sender, args);
        }
    }
}

As you can see, it starts the test run by turning the LED light off. Then, as individual test results come in, the LED light starts blinking. On the first passing test, it will start blinking green. It will continue to do so until a failing test result comes in, at which point it will switch to blinking red instead. Once it has started blinking red, it will stay red, regardless of subsequent results. So the LedTestClient doesn’t actually tell you which test failed, it just tells you if some test failed. Useful for a sanity check, but not much else. That’s where the DisplayTestClient comes in, since it actually shows the names of the tests as they pass or fail.
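If you don’t have a gadget at hand, there’s nothing stopping you from writing a test client that just logs to the debugger output instead. A minimal sketch, not part of μnit as published:

// Hypothetical test client that writes results with Debug.Print.
public class DebugTestClient : ITestClient
{
    public void OnTestRunStart(object sender,
        TestRunEventHandlerArgs args)
    {
        Debug.Print("Test run started.");
    }

    public void OnSingleTestComplete(object sender,
        TestRunEventHandlerArgs args)
    {
        Debug.Print(args.Result.Name +
            (args.Result.Outcome == TestOutcome.Pass
                ? " passed" : " failed"));
    }

    public void OnTestRunComplete(object sender,
        TestRunEventHandlerArgs args)
    {
        Debug.Print("Done. Passed: "
            + args.Result.NumberOfTestsPassed.ToString()
            + ", failed: "
            + args.Result.NumberOfTestsFailed.ToString());
    }
}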

How does it look in practice? Here’s a video of μnit tests for Game of Life running on the FEZ Spider. When the tests all succeed, we proceed to run Game of Life. Whee!


A property with a view

I’ve never been much of an early adopter, so now that Silverlight is dead and cold and has been for a while, it seems appropriate for me to blog about it. More specifically, I’d like to write about the chore that is INotifyPropertyChanged and how to make practically all the grunt work go away.

(Actually, I’m not sure that Silverlight is completely dead yet. It may be that Microsoft won’t be pumping much fresh blood into its veins, but that’s not quite the same thing as being dead. A technology isn’t dead as long as there’s a market for it and all that jazz. Assuming that there are some kinds of applications (such as rich line-of-business applications) that are easier to build with Silverlight than HTML5 given the toolsets that are available, I think we’ll find that Silverlight hangs around to haunt us for quite a while. But that’s sort of a side issue. It doesn’t really matter if Silverlight is dead or not, the issue of tackling INotifyPropertyChanged is interesting in and of itself.)

So INotifyPropertyChanged is the hoop you have to jump through to enable Silverlight views to update themselves as the view models they bind to change. It’s hard to envision not wanting the view to update when the view model changes. So typically you’ll want all the properties you bind to, to automatically cause the view to refresh itself. The problem is that this doesn’t happen out of the box. Instead, there’s this cumbersome and tiresome ritual you have to go through where you implement INotifyPropertyChanged, and have all the setters in your view model holler “hey, I’ve changed” by firing the PropertyChanged event. Brains need not apply to do this kind of work; it’s just mind-numbing, repetitive plumbing code. It would be much nicer if the framework would just be intelligent enough to provide the necessary notifications all by itself. Unfortunately, that’s not the case.

Solution: IL weaving

Silver.Needle is the name I use for some code I wrote to fix that. The basic idea is to use IL manipulation to automatically turn the plain vanilla .NET properties on your view models into view-update-triggering properties, with the boring but necessary plumbing just magically *there*. Look ma, no hands!

If you’re unfamiliar with IL manipulation, you might assume that it’s hard to do because it’s sort of low-level and therefore all voodooy and scary. But you’d be wrong. It might have been, without proper tools. Enter the star of this blog post: Mono.Cecil. Mono.Cecil is a library for IL manipulation written by Jb Evain. It is so powerful, it’s almost indecent: you get the feeling that IL manipulation shouldn’t be that easy. But it is, it really is. It’s a walk in the park. And the power trip you get is unbelievable.

Of course, since I rarely have original thoughts, Silver.Needle isn’t unique. You’ll find that Justin Angel described a very similar approach on his blog, more than two years ago. He uses Mono.Cecil too. So do the Kind of Magic and NotifyPropertyWeaver projects, which might be worth checking out if you actually wanted to use something like this in your project. But as always, it’s much more fun and educational to roll your own!

Disclaimer: it is fairly easy to shoot yourself in the foot when you’re meddling with IL directly. I accept no liability if you try to run any of the code included in this blog post and end up injecting IL into your cat, or causing your boss to fail spectacularly at runtime, or encountering any other unfortunate and grotesque mishap as a result of doing so. You have been warned.

Viewable properties

To do the IL manipulation, we need a way to distinguish between properties to tamper with and properties to leave alone. We’ll refer to the former as viewable properties because, you know, they’re able to work with a view?

Silver.Needle gives you two options for indicating that a property is viewable. The first option is to opt-in for individual properties on a class, by annotating each property with the Viewable attribute. The second option is to annotate the entire class as Viewable, and optionally opt-out for individual properties on that class using the Opaque attribute. In either case, the class is considered to be a “view model”, with one or more viewable properties that notify the view of any changes.


public class ViewableAttribute : Attribute {}
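In usage, the class-level variant looks something like this (assuming an OpaqueAttribute declared in the same trivial way):

[Viewable]
public class AccountViewModel
{
    public string Owner { get; set; }   // viewable by default

    [Opaque]
    public string Secret { get; set; }  // opted out of notification
}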

So the task solved by Silver.Needle is to perform the IL voodoo necessary to make sure that the output of the C# compiler for this pretty lean and neato code:


public class PersonViewModel
{
    [Viewable]
    public string Name { get; set; }
}

…is the same as the output generated directly when compiling this cumbersome and clumsy mess:


public class PersonViewModel : INotifyPropertyChanged
{
    private string _name;

    public event PropertyChangedEventHandler PropertyChanged;

    public string Name
    {
        get
        {
            return _name;
        }
        set
        {
            _name = value;
            NotifyViewableProperty("Name");
        }
    }

    private void NotifyViewableProperty(string propertyName)
    {
        var propertyChanged = this.PropertyChanged;
        if (propertyChanged != null)
        {
            propertyChanged.Invoke(this,
                new PropertyChangedEventArgs(propertyName));
        }
    }
}

We start by using Mono.Cecil to look for types that contain such properties. It’s a simple matter of 1) loading the assembly with Mono.Cecil, 2) iterating over the types in the assembly and 3) iterating over the properties defined for each type. Of course, if we find one or more “view model” types with properties that should perform view notification, we must proceed to do the necessary IL manipulation and write the changed assembly to disk afterwards. The meat of the matter is in scanning an individual type and doing the IL manipulation. We’ll come to that shortly. The surrounding bureaucracy is handled by the NotificationTamperer class.


using Mono.Cecil;

public class NotificationTamperer : ITamperer
{
    private readonly string _assemblyOutputFileName;

    public NotificationTamperer() : this("default_tampered.dll") {}

    public NotificationTamperer(string assemblyOutputFileName)
    {
        _assemblyOutputFileName = assemblyOutputFileName;
    }

    private static AssemblyDefinition ReadSilverlightAssembly(
        string assemblyPath)
    {
        var resolver = new DefaultAssemblyResolver();
        resolver.AddSearchDirectory(@"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\Silverlight\v4.0");
        var assembly = AssemblyDefinition.ReadAssembly(
            assemblyPath,
            new ReaderParameters { AssemblyResolver = resolver });
        return assembly;
    }

    public bool TamperWith(string assemblyPath)
    {
        var assembly = ReadSilverlightAssembly(assemblyPath);
        bool result = TamperWith(assembly);
        if (result)
        {
            assembly.Write(_assemblyOutputFileName);
        }
        return result;
    }

    private bool TamperWith(AssemblyDefinition assembly)
    {
        bool result = false;
        foreach (TypeDefinition type in assembly.MainModule.Types)
        {
            result = new TypeTamperer(type).MaybeTamperWith() || result;
        }
        return result;
    }
}

There’s not much going on here worth commenting upon, it’s just the stuff outlined above. I guess the only thing worth noting is that we need to add a reference to the Silverlight assemblies, so that Mono.Cecil can resolve type dependencies as necessary later on. (For simplicity, I just hard-coded the path to the assemblies on my system. Did I mention it’s not quite ready for the enterprise yet?)

The interesting stuff happens in the TypeTamperer. You’ll notice that the TypeTamperer works on a single type, which is passed in to the constructor. This is the type that may or may not contain viewable properties, and may or may not end up being tampered with. The type is represented by a Mono.Cecil TypeDefinition, which has collections for interfaces, methods, fields, events and so forth.

The TypeTamperer does two things. First, it looks for any viewable properties. Second, if any viewable properties were found, it ensures that the type in question implements the INotifyPropertyChanged interface, and that the viewable properties participate in the notification mechanism by raising the PropertyChanged event as appropriate.

Let’s see how the identification happens:


public bool MaybeTamperWith()
{
    return _typeDef.IsClass
        && HasPropertiesToTamperWith()
        && ReallyTamperWith();
}

private bool HasPropertiesToTamperWith()
{
    FindPropertiesToTamperWith();
    return _map.Count > 0;
}

private void FindPropertiesToTamperWith()
{
    var isViewableType = IsViewable(_typeDef);
    foreach (var prop in _typeDef.Properties
        .Where(p => IsViewable(p) || (isViewableType && !IsOpaque(p))))
    {
        HandlePropertyToNotify(prop);
    }
}

private static bool IsViewable(ICustomAttributeProvider item)
{
    return HasAttribute(item, ViewableAttributeName);
}

private static bool IsOpaque(ICustomAttributeProvider item)
{
    return HasAttribute(item, OpaqueAttributeName);
}

private static bool HasAttribute(ICustomAttributeProvider item,
                                 string attributeName)
{
    return item.CustomAttributes.Any(
        a => a.AttributeType.Name == attributeName);
}

As you can see, the code is very straight-forward. We just make sure that the type we’re inspecting is a class (as opposed to an interface), and look for viewable properties. If we find a viewable property, the HandlePropertyToNotify method is called. We’ll look at that method in detail later on. For now, we’ll just note that the property ends up in an IDictionary named _map, and that ReallyTamperWith is then called, triggering the IL manipulation.

For each of the view model types, we need to make sure that the type implements INotifyPropertyChanged. From an IL manipulation point of view, this entails three things:

  • Adding the interface declaration, if needed.
  • Adding the event declaration, if needed.
  • Adding the event trigger method, if needed.

Silver.Needle tries to play nicely with a complete or partial hand-written implementation of INotifyPropertyChanged. It’s not too hard to do, the main complicating matter being that we need to consider inheritance. The type might inherit from another type (say, ViewModelBase) that implements the interface. Obviously, we shouldn’t do anything in that case. We should only inject implementation code for types that do not already implement the interface, either directly or in a base type. To do this, we need to walk the inheritance chain up to System.Object before we can conclude that the interface is indeed missing and proceed to inject code for the implementation.

https://gist.github.com/2340074

This is still pretty self-explanatory. The most interesting method is TypeImplementsInterface, which calls itself recursively to climb up the inheritance ladder until it either finds a type that implements INotifyPropertyChanged or a type whose base type is null (that would be System.Object).
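Since the gist isn’t inlined here, a minimal sketch of how such a recursive check can be written with Mono.Cecil might look like this (the real method lives in the linked gist and may differ in detail):

// Climb the inheritance ladder until we find the interface or hit
// System.Object (whose BaseType is null).
private bool TypeImplementsInterface(TypeDefinition typeDef,
    string interfaceFullName)
{
    if (typeDef.Interfaces.Any(i => i.FullName == interfaceFullName))
    {
        return true;
    }
    var baseTypeRef = typeDef.BaseType;
    return baseTypeRef != null
        && TypeImplementsInterface(baseTypeRef.Resolve(),
                                   interfaceFullName);
}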

Implementing the interface

Injecting code to implement the interface consists of two parts, just as if you were implementing the interface by writing source code by hand: 1) injecting the declaration of the interface, and 2) injecting the code to fulfil the contract defined by the interface, that is, the declaration of the PropertyChanged event handler.


private void InjectInterfaceDeclaration()
{
    _typeDef.Interfaces.Add(_types.INotifyPropertyChanged);
}

The code to add the interface declaration is utterly trivial: you just add the appropriate type to the TypeDefinition‘s Interfaces collection. You get a first indication of the power of Mono.Cecil right there. You do need to obtain the proper TypeReference (another Mono.Cecil type) though. I’ve created a helper class to make this as simple as I could as well. The code looks like this:


using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Linq;
using System.Threading;
using Mono.Cecil;

public class TypeResolver
{
    private readonly TypeDefinition _typeDef;
    private readonly IDictionary<Type, TypeReference> _typeRefs =
        new Dictionary<Type, TypeReference>();
    private readonly TypeSystem _ts;
    private readonly ModuleDefinition _systemModule;
    private readonly ModuleDefinition _mscorlibModule;

    public TypeResolver(TypeDefinition typeDef)
    {
        _typeDef = typeDef;
        _ts = typeDef.Module.TypeSystem;
        Func<string, ModuleDefinition> getModule =
            m => typeDef.Module.AssemblyResolver.Resolve(m).MainModule;
        _systemModule = getModule("system");
        _mscorlibModule = getModule("mscorlib");
    }

    public TypeReference Object
    {
        get { return _ts.Object; }
    }

    public TypeReference String
    {
        get { return _ts.String; }
    }

    public TypeReference Void
    {
        get { return _ts.Void; }
    }

    public TypeReference INotifyPropertyChanged
    {
        get { return LookupSystem(typeof(INotifyPropertyChanged)); }
    }

    public TypeReference PropertyChangedEventHandler
    {
        get { return LookupSystem(typeof(PropertyChangedEventHandler)); }
    }

    public TypeReference PropertyChangedEventArgs
    {
        get { return LookupSystem(typeof(PropertyChangedEventArgs)); }
    }

    public TypeReference Delegate
    {
        get { return LookupCore(typeof(Delegate)); }
    }

    public TypeReference Interlocked
    {
        get { return LookupCore(typeof(Interlocked)); }
    }

    private TypeReference LookupCore(Type t)
    {
        return Lookup(t, _mscorlibModule);
    }

    private TypeReference LookupSystem(Type t)
    {
        return Lookup(t, _systemModule);
    }

    private TypeReference Lookup(Type t, ModuleDefinition moduleDef)
    {
        if (!_typeRefs.ContainsKey(t))
        {
            var typeRef = moduleDef.Types.FirstOrDefault(
                td => td.FullName == t.FullName);
            if (typeRef == null)
            {
                return null;
            }
            var importedTypeRef = _typeDef.Module.Import(typeRef);
            _typeRefs[t] = importedTypeRef;
        }
        return _typeRefs[t];
    }
}


Mono.Cecil comes with a built-in TypeSystem type that contains TypeReference objects for the most common types, such as Object and String. For other types, though, you need to use Mono.Cecil’s assembly resolver to get the appropriate TypeReference objects. For convenience, TypeResolver defines properties with TypeReference objects for all the types used by TypeTamperer.

With the interface declaration in place, we need to provide an implementation (otherwise, we get nasty runtime exceptions).

Herein lies a potential hiccup which might lead to problems in case the implementer is exceedingly stupid, though. Since Silver.Needle is a proof-of-concept rather than a super-robust enterprise tool, I don’t worry too much about such edge cases. Nevertheless, I try to play nice where I can (and if it’s easy to do), so here goes: The issue is that the view model type might already have a member of some sort named PropertyChanged, even though the type itself doesn’t implement INotifyPropertyChanged. If it actually is an event handler as defined by INotifyPropertyChanged, everything is fine (I just need to make sure that I don’t add another one). The real issue is if there is some other member named PropertyChanged, say, a property or a method. I can’t imagine why you’d want to do such a thing, but of course there’s no stopping the inventiveness of the sufficiently stupid programmer. To avoid producing a weird assembly that will fail dramatically at runtime, Silver.Needle will discover the presence of a misplaced, ill-typed PropertyChanged and give up, leaving the type untampered (and hence not implementing INotifyPropertyChanged).

Adding the event handler is a bit more work than you might expect. If you inspect the IL, it becomes abundantly clear that C# provides a good spoonful of syntactic sugar for events. At the IL level, you’ll find that the simple event declaration expands to this:

  • A field for the event handler.
  • An event, which hooks up the field with add and remove methods.
  • Implementation for the add and remove methods.

It’s quite a bit of IL:


.field private class [System]System.ComponentModel.PropertyChangedEventHandler PropertyChanged

.event [System]System.ComponentModel.PropertyChangedEventHandler PropertyChanged
{
    .addon instance void Silver.Needle.Tests.Data.Dependencies.Complex.PersonViewModel::add_PropertyChanged(class [System]System.ComponentModel.PropertyChangedEventHandler)
    .removeon instance void Silver.Needle.Tests.Data.Dependencies.Complex.PersonViewModel::remove_PropertyChanged(class [System]System.ComponentModel.PropertyChangedEventHandler)
}

.method public final hidebysig specialname newslot virtual
    instance void add_PropertyChanged (
        class [System]System.ComponentModel.PropertyChangedEventHandler 'value'
    ) cil managed
{
    .maxstack 3
    .locals init (
        [0] class [System]System.ComponentModel.PropertyChangedEventHandler,
        [1] class [System]System.ComponentModel.PropertyChangedEventHandler,
        [2] class [System]System.ComponentModel.PropertyChangedEventHandler
    )
    IL_0000: ldarg.0
    IL_0001: ldfld class [System]System.ComponentModel.PropertyChangedEventHandler Silver.Needle.Tests.Data.Dependencies.Complex.PersonViewModel::PropertyChanged
    IL_0006: stloc.0
    // loop start (head: IL_0007)
    IL_0007: ldloc.0
    IL_0008: stloc.1
    IL_0009: ldloc.1
    IL_000a: ldarg.1
    IL_000b: call class [mscorlib]System.Delegate [mscorlib]System.Delegate::Combine(class [mscorlib]System.Delegate, class [mscorlib]System.Delegate)
    IL_0010: castclass [System]System.ComponentModel.PropertyChangedEventHandler
    IL_0015: stloc.2
    IL_0016: ldarg.0
    IL_0017: ldflda class [System]System.ComponentModel.PropertyChangedEventHandler Silver.Needle.Tests.Data.Dependencies.Complex.PersonViewModel::PropertyChanged
    IL_001c: ldloc.2
    IL_001d: ldloc.1
    IL_001e: call !!0 [mscorlib]System.Threading.Interlocked::CompareExchange<class [System]System.ComponentModel.PropertyChangedEventHandler>(!!0&, !!0, !!0)
    IL_0023: stloc.0
    IL_0024: ldloc.0
    IL_0025: ldloc.1
    IL_0026: bne.un.s IL_0007
    // end loop
    IL_0028: ret
} // end of method PersonViewModel::add_PropertyChanged

.method public final hidebysig specialname newslot virtual
    instance void remove_PropertyChanged (
        class [System]System.ComponentModel.PropertyChangedEventHandler 'value'
    ) cil managed
{
    .maxstack 3
    .locals init (
        [0] class [System]System.ComponentModel.PropertyChangedEventHandler,
        [1] class [System]System.ComponentModel.PropertyChangedEventHandler,
        [2] class [System]System.ComponentModel.PropertyChangedEventHandler
    )
    IL_0000: ldarg.0
    IL_0001: ldfld class [System]System.ComponentModel.PropertyChangedEventHandler Silver.Needle.Tests.Data.Dependencies.Complex.PersonViewModel::PropertyChanged
    IL_0006: stloc.0
    // loop start (head: IL_0007)
    IL_0007: ldloc.0
    IL_0008: stloc.1
    IL_0009: ldloc.1
    IL_000a: ldarg.1
    IL_000b: call class [mscorlib]System.Delegate [mscorlib]System.Delegate::Remove(class [mscorlib]System.Delegate, class [mscorlib]System.Delegate)
    IL_0010: castclass [System]System.ComponentModel.PropertyChangedEventHandler
    IL_0015: stloc.2
    IL_0016: ldarg.0
    IL_0017: ldflda class [System]System.ComponentModel.PropertyChangedEventHandler Silver.Needle.Tests.Data.Dependencies.Complex.PersonViewModel::PropertyChanged
    IL_001c: ldloc.2
    IL_001d: ldloc.1
    IL_001e: call !!0 [mscorlib]System.Threading.Interlocked::CompareExchange<class [System]System.ComponentModel.PropertyChangedEventHandler>(!!0&, !!0, !!0)
    IL_0023: stloc.0
    IL_0024: ldloc.0
    IL_0025: ldloc.1
    IL_0026: bne.un.s IL_0007
    // end loop
    IL_0028: ret
} // end of method PersonViewModel::remove_PropertyChanged


The bad news is that it’s up to us to inject all that goo into our type. The good news is that Mono.Cecil makes it fairly easy to do. We’ll get right to it:


private void InjectEventHandler()
{
    InjectPropertyChangedField();
    InjectEventDeclaration();
}

private void InjectPropertyChangedField()
{
    // .field private class [System]System.ComponentModel.PropertyChangedEventHandler PropertyChanged
    var field = new FieldDefinition(PropertyChangedFieldName,
                                    FieldAttributes.Private,
                                    _types.PropertyChangedEventHandler);
    _typeDef.Fields.Add(field);
}

private void InjectEventDeclaration()
{
    // .event [System]System.ComponentModel.PropertyChangedEventHandler PropertyChanged
    // {
    //     .addon instance void Voodoo.ViewModel.GoalViewModel::add_PropertyChanged(class [System]System.ComponentModel.PropertyChangedEventHandler)
    //     .removeon instance void Voodoo.ViewModel.GoalViewModel::remove_PropertyChanged(class [System]System.ComponentModel.PropertyChangedEventHandler)
    // }
    var eventDef = new EventDefinition(PropertyChangedFieldName,
                                       EventAttributes.None,
                                       _types.PropertyChangedEventHandler)
    {
        AddMethod = CreateAddPropertyChangedMethod(),
        RemoveMethod = CreateRemovePropertyChangedMethod()
    };
    _typeDef.Methods.Add(eventDef.AddMethod);
    _typeDef.Methods.Add(eventDef.RemoveMethod);
    _typeDef.Events.Add(eventDef);
}

Here we add the field for the event handler, and we create an event which hooks up to two methods for adding and removing event handlers, respectively. We’re still not done, though – in fact, the bulk of the nitty gritty work remains.

That bulk is the implementation of the add and remove methods. If you examine the IL, you’ll see that the implementations are virtually identical, except for a single method call in the middle somewhere (add calls a method called Combine, remove calls Remove). We can abstract that out, like so:


private MethodDefinition CreateAddPropertyChangedMethod()
{
    return CreatePropertyChangedEventHookupMethod(
        "add_PropertyChanged",
        "Combine");
}

private MethodDefinition CreateRemovePropertyChangedMethod()
{
    return CreatePropertyChangedEventHookupMethod(
        "remove_PropertyChanged",
        "Remove");
}

private MethodDefinition CreatePropertyChangedEventHookupMethod(
    string eventHookupMethodName,
    string delegateMethodName)
{
    // .method public final hidebysig specialname newslot virtual
    //     instance void add_PropertyChanged (
    //         class [System]System.ComponentModel.PropertyChangedEventHandler 'value'
    //     ) cil managed
    var methodDef = new MethodDefinition(eventHookupMethodName,
                                         MethodAttributes.Public |
                                         MethodAttributes.Final |
                                         MethodAttributes.HideBySig |
                                         MethodAttributes.SpecialName |
                                         MethodAttributes.NewSlot |
                                         MethodAttributes.Virtual,
                                         _types.Void);
    var paramDef = new ParameterDefinition("value",
                                           ParameterAttributes.None,
                                           _types.PropertyChangedEventHandler);
    methodDef.Parameters.Add(paramDef);
    methodDef.Body.MaxStackSize = 3;
    for (int i = 0; i < 3; i++)
    {
        var v = new VariableDefinition(_types.PropertyChangedEventHandler);
        methodDef.Body.Variables.Add(v);
    }
    methodDef.Body.InitLocals = true;
    var il = methodDef.Body.GetILProcessor();
    Action<OpCode> op = x => il.Append(il.Create(x));
    // IL_0000: ldarg.0
    op(OpCodes.Ldarg_0);
    // IL_0001: ldfld class [System]System.ComponentModel.PropertyChangedEventHandler Voodoo.ViewModel.GoalViewModel::PropertyChanged
    var eventHandlerFieldDef = _typeDef.Fields
        .FirstOrDefault(f => f.Name == PropertyChangedFieldName);
    il.Append(il.Create(OpCodes.Ldfld, eventHandlerFieldDef));
    // IL_0006: stloc.0
    op(OpCodes.Stloc_0);
    // loop start (head: IL_0007)
    // IL_0007: ldloc.0
    var loopTargetInsn = il.Create(OpCodes.Ldloc_0);
    il.Append(loopTargetInsn);
    // IL_0008: stloc.1
    op(OpCodes.Stloc_1);
    // IL_0009: ldloc.1
    op(OpCodes.Ldloc_1);
    // IL_000a: ldarg.1
    op(OpCodes.Ldarg_1);
    // IL_000b: call class [mscorlib]System.Delegate [mscorlib]System.Delegate::Combine(class [mscorlib]System.Delegate, class [mscorlib]System.Delegate)
    var combineMethodReference = new MethodReference(
        delegateMethodName,
        _types.Delegate,
        _types.Delegate);
    var delegateParamDef = new ParameterDefinition(_types.Delegate);
    combineMethodReference.Parameters.Add(delegateParamDef);
    combineMethodReference.Parameters.Add(delegateParamDef);
    il.Append(il.Create(OpCodes.Call, combineMethodReference));
    // IL_0010: castclass [System]System.ComponentModel.PropertyChangedEventHandler
    il.Append(il.Create(OpCodes.Castclass,
                        _types.PropertyChangedEventHandler));
    // IL_0015: stloc.2
    op(OpCodes.Stloc_2);
    // IL_0016: ldarg.0
    op(OpCodes.Ldarg_0);
    // IL_0017: ldflda class [System]System.ComponentModel.PropertyChangedEventHandler Voodoo.ViewModel.GoalViewModel::PropertyChanged
    il.Append(il.Create(OpCodes.Ldflda, eventHandlerFieldDef));
    // IL_001c: ldloc.2
    op(OpCodes.Ldloc_2);
    // IL_001d: ldloc.1
    op(OpCodes.Ldloc_1);
    // IL_001e: call !!0 [mscorlib]System.Threading.Interlocked::CompareExchange<class [System]System.ComponentModel.PropertyChangedEventHandler>(!!0&, !!0, !!0)
    // var declaringTypeRef = _typeDef.Module.Import(typeof(Interlocked));
    var declaringTypeRef = _types.Interlocked;
    var elementMethodRef = new MethodReference(
        "CompareExchange",
        _types.Void,
        declaringTypeRef);
    var genParam = new GenericParameter("!!0", elementMethodRef);
    elementMethodRef.ReturnType = genParam;
    elementMethodRef.GenericParameters.Add(genParam);
    var firstParamDef = new ParameterDefinition(
        new ByReferenceType(genParam));
    var otherParamDef = new ParameterDefinition(genParam);
    elementMethodRef.Parameters.Add(firstParamDef);
    elementMethodRef.Parameters.Add(otherParamDef);
    elementMethodRef.Parameters.Add(otherParamDef);
    var genInstanceMethod = new GenericInstanceMethod(elementMethodRef);
    genInstanceMethod.GenericArguments.Add(
        _types.PropertyChangedEventHandler);
    il.Append(il.Create(OpCodes.Call, genInstanceMethod));
    // IL_0023: stloc.0
    op(OpCodes.Stloc_0);
    // IL_0024: ldloc.0
    op(OpCodes.Ldloc_0);
    // IL_0025: ldloc.1
    op(OpCodes.Ldloc_1);
    // IL_0026: bne.un.s IL_0007
    il.Append(il.Create(OpCodes.Bne_Un_S, loopTargetInsn));
    // end loop
    // IL_0028: ret
    op(OpCodes.Ret);
    return methodDef;
}

It looks a little bit icky at first glance, but it’s actually quite straightforward. You just need to accurately and painstakingly reconstruct the IL statement by statement. As you can see, I’ve left the original IL in the source code as comments, to make it clear what we’re trying to reproduce. It takes patience more than brains.

The final piece of the implementation puzzle is to implement a method for firing the event. Again, Silver.Needle tries to play along with any hand-written code you have. So if you have implemented a method so-and-so to do view notification, it’s quite likely that Silver.Needle will discover it and use it. Basically it will scan all methods in the inheritance chain for your view model, and assume that a method which accepts a single string parameter, returns void and calls PropertyChangedEventHandler.Invoke somewhere in the method body is indeed a notification method.


private static MethodDefinition FindNotificationMethod(
    TypeDefinition typeDef,
    bool includePrivateMethods = true)
{
    foreach (var m in typeDef.Methods.Where(m => includePrivateMethods
        || m.Attributes.HasFlag(MethodAttributes.Public)))
    {
        if (IsProbableNotificationMethod(m))
        {
            return m;
        }
    }
    var baseTypeRef = typeDef.BaseType;
    if (baseTypeRef.FullName != "System.Object")
    {
        return FindNotificationMethod(baseTypeRef.Resolve(), false);
    }
    return null;
}

private static bool IsProbableNotificationMethod(
    MethodDefinition methodDef)
{
    return methodDef.HasBody
        && IsProbableNotificationMethodWithBody(methodDef);
}

private static bool IsProbableNotificationMethodWithBody(
    MethodDefinition methodDef)
{
    foreach (var insn in methodDef.Body.Instructions)
    {
        if (insn.OpCode == OpCodes.Callvirt)
        {
            var callee = (MethodReference)insn.Operand;
            if (callee.Name == "Invoke"
                && callee.DeclaringType.Name == "PropertyChangedEventHandler")
            {
                return true;
            }
        }
    }
    return false;
}

Should Silver.Needle fail to identify an existing notification method, though, there is no problem. After all, it’s perfectly OK to have more than one method that can be used to fire the event. Hence if no notification method is found, one is injected. No sleep lost.

In case no existing notification method was found, we need to provide one. We’re getting used to this kind of code by now:


private MethodDefinition CreateNotificationMethodDefinition()
{
    const string MethodName = "NotifyViewableProperty";
    var methodDef = new MethodDefinition(MethodName,
                                         MethodAttributes.Private |
                                         MethodAttributes.HideBySig,
                                         _types.Void);
    var paramDef = new ParameterDefinition("propertyName",
                                           ParameterAttributes.None,
                                           _types.String);
    methodDef.Parameters.Add(paramDef);
    methodDef.Body.MaxStackSize = 4;
    var v = new VariableDefinition(_types.PropertyChangedEventHandler);
    methodDef.Body.Variables.Add(v);
    methodDef.Body.InitLocals = true;
    var il = methodDef.Body.GetILProcessor();
    Action<OpCode> op = x => il.Append(il.Create(x));
    // IL_0000: ldarg.0
    op(OpCodes.Ldarg_0);
    // IL_0001: ldfld class [System]System.ComponentModel.PropertyChangedEventHandler Voodoo.ViewModel.GoalViewModel::PropertyChanged
    var eventHandlerFieldDef = FindEventFieldDeclaration(_typeDef);
    il.Append(il.Create(OpCodes.Ldfld, eventHandlerFieldDef));
    // IL_0006: stloc.0
    op(OpCodes.Stloc_0);
    // IL_0007: ldloc.0
    op(OpCodes.Ldloc_0);
    // IL_0008: brfalse.s IL_0017
    var jumpTargetInsn = il.Create(OpCodes.Ret); // See below, IL_0017
    il.Append(il.Create(OpCodes.Brfalse_S, jumpTargetInsn));
    // IL_000a: ldloc.0
    op(OpCodes.Ldloc_0);
    // IL_000b: ldarg.0
    op(OpCodes.Ldarg_0);
    // IL_000c: ldarg.1
    op(OpCodes.Ldarg_1);
    // IL_000d: newobj instance void [System]System.ComponentModel.PropertyChangedEventArgs::.ctor(string)
    var eventArgsTypeRef = _types.PropertyChangedEventArgs;
    var ctorRef = new MethodReference(".ctor",
                                      _types.Void,
                                      eventArgsTypeRef);
    var ctorParamDef = new ParameterDefinition("propertyName",
                                               ParameterAttributes.None,
                                               _types.String);
    ctorRef.Parameters.Add(ctorParamDef);
    ctorRef.HasThis = true;
    il.Append(il.Create(OpCodes.Newobj, ctorRef));
    // IL_0012: callvirt instance void [System]System.ComponentModel.PropertyChangedEventHandler::Invoke(object, class [System]System.ComponentModel.PropertyChangedEventArgs)
    var invokeMethodRef = new MethodReference("Invoke",
                                              _types.Void,
                                              _types.PropertyChangedEventHandler);
    invokeMethodRef.Parameters.Add(
        new ParameterDefinition(_types.Object));
    invokeMethodRef.Parameters.Add(
        new ParameterDefinition(eventArgsTypeRef));
    invokeMethodRef.HasThis = true;
    il.Append(il.Create(OpCodes.Callvirt, invokeMethodRef));
    // IL_0017: ret
    il.Append(jumpTargetInsn);
    return methodDef;
}

This produces IL for a NotifyViewableProperty method just like the one we wrote in C# in the “hand-implemented” PersonViewModel above.

Injecting notification

With the interface implementation and notification method in place, we finally come to the fun part – injecting the property notification itself!

Unless you’re the kind of person who uses ILSpy or ILDasm regularly, you might wonder if and how it will work with auto-properties – properties where you don’t actually provide any body for the getters and setters. Well, it doesn’t matter. Auto-properties are a C# feature, they don’t exist in IL. So you’ll find there’s a backing field there (albeit with a weird name) that the C# compiler conjured up for you. It’s just syntactic sugar to reduce typing.
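To illustrate, an auto-property compiles to roughly the following. (The actual backing field gets a compiler-generated name along the lines of <Name>k__BackingField, which isn’t even legal C#, so this is an approximation.)

// What the compiler conjures up for: public string Name { get; set; }
private string _compilerGeneratedBackingField;
public string Name
{
    get { return _compilerGeneratedBackingField; }
    set { _compilerGeneratedBackingField = value; }
}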

What about get-only properties? That is, properties that have getters but no setters? Well, first of all, can they change? Even if they’re get-only? Sure they can. Say you have a property which derives its value from another property. For instance, you might have an Age property which depends upon a BirthDate property, like so:


private DateTime _birthDate;
public DateTime BirthDate
{
    get { return _birthDate; }
    set { _birthDate = value; }
}

[Viewable]
public int Age
{
    get { return DateTime.Now.Year - BirthDate.Year; }
}


In the (admittedly unlikely) scenario that the BirthDate changes, the Age will change too. And if Age is a property on a view model that a view will bind to, you’ll want any display of Age to update itself automatically whenever BirthDate changes. How can we do that? Well, if we implemented this by hand, we could add a notification call in BirthDate‘s setter to say that Age changed.


private DateTime _birthDate;
public DateTime BirthDate
{
    get { return _birthDate; }
    set
    {
        _birthDate = value;
        Notify("Age");
    }
}

It feels a little iffy, since it sort of goes the wrong way – the observed knowing about the observer rather than the other way around. But that’s how you’d do it.

Silver.Needle does the same thing for you automatically. That is, for get-only properties, Silver.Needle will inspect the getter to find any calls to getters on other properties on the same object instance. If those properties turn out to have setters, notifications to update the get-only property will be injected there. If those properties are get-only too, the process repeats itself recursively. So you could have chains of properties that depend on properties that depend on properties etc.

To do this correctly, the injection process has two steps. First, we identify which properties depend on which, second, we do the actual IL manipulation to insert the notification calls.

So, first we identify dependencies between properties. In the normal case of a property with a setter of its own, the property will simply depend on itself. (Of course, there might be other properties that depend on it as well.) So for each property with a setter, we build a list of dependent properties – that is, properties that we need to inject notification calls for. Note that while we only do notification for properties tagged as Viewable, we might inject notification calls into the setters of any property on the view model, Viewable or not. (In the example above, you’ll notice that BirthDate is not, in fact, tagged Viewable. When the setter is called, it will announce that Age changed, but not itself!)

The code to register the dependencies between properties is as follows:


private void HandlePropertyToNotify(PropertyDefinition prop)
{
    foreach (var affector in FindAffectingProperties(prop, new List<string>()))
    {
        AddDependency(affector, prop);
    }
}

private void AddDependency(PropertyDefinition key,
                           PropertyDefinition value)
{
    if (!_map.ContainsKey(key))
    {
        _map[key] = new List<PropertyDefinition>();
    }
    _map[key].Add(value);
}

private List<PropertyDefinition> FindAffectingProperties(
    PropertyDefinition prop,
    IList<string> seen)
{
    if (seen.Any(n => n == prop.Name))
    {
        return new List<PropertyDefinition>();
    }
    seen.Add(prop.Name);
    if (prop.SetMethod != null)
    {
        return new List<PropertyDefinition> { prop };
    }
    if (prop.GetMethod != null)
    {
        return FindAffectingPropertiesFromGetter(prop.GetMethod, seen);
    }
    return new List<PropertyDefinition>();
}

private List<PropertyDefinition> FindAffectingPropertiesFromGetter(
    MethodDefinition getter,
    IList<string> seen)
{
    var result = new List<PropertyDefinition>();
    foreach (var insn in getter.Body.Instructions)
    {
        if (insn.OpCode == OpCodes.Call)
        {
            var methodRef = (MethodReference)insn.Operand;
            if (methodRef.Name.StartsWith(PropertyGetterPrefix))
            {
                // Found an affecting getter inside the current getter!
                // Get list of dependencies from this getter.
                string affectingPropName = methodRef.Name
                    .Substring(PropertyGetterPrefix.Length);
                var affectingProp = _typeDef.Properties
                    .FirstOrDefault(p => p.Name == affectingPropName);
                if (affectingProp != null)
                {
                    result.AddRange(FindAffectingProperties(affectingProp, seen));
                }
            }
        }
    }
    return result;
}

So you can see that it’s a recursive process to walk the dependency graph for a get-only property. You’ll notice that there is some code there to recognize that we’ve seen a certain property before, to avoid infinite loops when walking the graph. Of course, it might happen that we don’t find any setters to inject notification into. For instance, it may turn out that a viewable property actually depends on constant values only. In that case, Silver.Needle will simply give up, since there is no place to inject the notification.

When we have the complete list of properties and dependent properties, we can do the actual IL manipulation. That is, for each affecting property, we can inject notifications for all affected properties.

There are two possible strategies for the injection itself: simple and sophisticated. The simple strategy employed by Silver.Needle is to do notification regardless of whether any state change occurs as a result of calling the property setter. For instance, you might have some guard clause deciding whether or not to actually update the field backing the property – a conditional setter if you will. Perhaps you want to write to the backing field only when the value has actually changed. Silver.Needle doesn’t care about that. If the setter is called, the view is notified. I believe this makes sense, since the setter is the abstraction boundary for the operation you’re performing, not whatever backing field you may or may not write to. Also, I reckon that it doesn’t *hurt* much to do a couple of superfluous view refreshes.
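To make that concrete, consider a conditional setter like this sketch. Silver.Needle will inject a notification call that runs whether or not the guard lets the write through:

private string _name;

[Viewable]
public string Name
{
    get { return _name; }
    set
    {
        // Guard clause: only write when the value actually changes.
        // Silver.Needle still notifies on every call to the setter.
        if (_name != value)
        {
            _name = value;
        }
    }
}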

It would be entirely possible to do something a little bit more sophisticated, though – I just don’t think it’s worth the effort (plus it violates encapsulation, doesn’t it?). If we wanted to, we could use a simple program analysis to distinguish between paths that may or may not result in the view model altering state. Technically, we could take the presence of a stfld IL instruction (which stores a value to a field) as evidence of state change. We could even throw in a little bit of data flow analysis to see if the value passed to the setter was actually on the stack when the stfld was executed. In that case, we’d interpret “property change” to mean “some field acquires the value passed to the setter”, which may or may not seem right to you. So it could be done, within reason.

Notice, though, the appeal to reason. It’s easy to come up with a setter which results in an observable state change without ever calling stfld. For instance, you could push the value onto a stack instead of storing it in a field, and have the getter return the top element of the stack. Sort of contrived, but it could be done. Or you could pass the value to some method, which may or may not store it somewhere. So you see, it’s hard to do properly in the general case. Hence Silver.Needle keeps things simple, and says that the view should be notified of property change whenever the setter is called. That way, we might do a couple of superfluous notifications, but at least we don’t miss any.

Now we just need to figure out where to inject the notification calls. Obviously it needs to be the last thing you do in the setter, to ensure that any state change has actually occurred before we do the notification (otherwise we’d refresh the view to show a stale property value!). That’s easy if you have a single return point from your setter, somewhat harder if there are several.

You could of course inject notification calls before each return point. That would give the correct semantics but is a bit brutish and not particularly elegant. Instead, Silver.Needle will essentially perform an extract method refactoring if there is more than one return point. The original body of the property setter is moved to a new method with a name derived from the property name. The property setter is then given a new body, consisting of a call to the new method, followed by necessary notification calls. Nice and tidy.
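In C# terms, the transformation amounts to something like the sketch below. (The actual work happens at the IL level, and the generated method name follows the Refactored…Setter pattern you’ll see in the code; the two snippets are before/after snapshots, not one compilable class.)

private string _name;

// Before: a setter with two return points.
public string Name
{
    get { return _name; }
    set
    {
        if (value == null) return;
        _name = value;
    }
}

// After (conceptually): the original body has moved to a new method,
// and the setter itself has a single return point with the
// notification call(s) at the end.
public string Name
{
    get { return _name; }
    set
    {
        RefactoredNameSetter(value);
        NotifyViewableProperty("Name");
    }
}

private void RefactoredNameSetter(string value)
{
    if (value == null) return;
    _name = value;
}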

A third alternative would be to wrap the body of the setter in a try block and perform notification in a finally block. Yes, that would mean that notifications would be given even if an exception is thrown during the execution of the setter. Would that be a problem? No. Why not? Because you shouldn’t throw exceptions in your setters. Again, if you have complex logic in the setters of your view models, I have a meme for you: “View models, you’re doing it wrong”.

So, implementation-wise, we need to support two scenarios: with or without refactoring. In either case, we end up with a setter that has a single return point preceded by notification calls. As usual, it’s pretty straight-forward to do the necessary alterations to the body of the setter using Mono.Cecil. Here’s the code:


private void InjectNotification(MethodDefinition methodDef,
                                IEnumerable<string> propNames)
{
    if (_notifyMethodDef == null || methodDef == null)
    {
        return;
    }
    if (HasMultipleReturnPoints(methodDef))
    {
        RefactorSetterAndInjectNotification(methodDef, propNames);
    }
    else
    {
        InjectNotificationDirectly(methodDef, propNames);
    }
}

private bool HasMultipleReturnPoints(MethodDefinition methodDef)
{
    return methodDef.Body.Instructions.Count(
        insn => insn.OpCode == OpCodes.Ret) > 1;
}

private void RefactorSetterAndInjectNotification(
    MethodDefinition oldMethodDef,
    IEnumerable<string> propNames)
{
    var methodName = "Refactored" + oldMethodDef.Name
        .Substring(PropertySetterPrefix.Length) + "Setter";
    var methodDef = new MethodDefinition(methodName,
                                         oldMethodDef.Attributes,
                                         oldMethodDef.ReturnType);
    foreach (var oldParamDef in oldMethodDef.Parameters)
    {
        var paramDef = new ParameterDefinition(
            oldParamDef.Name,
            oldParamDef.Attributes,
            oldParamDef.ParameterType);
        methodDef.Parameters.Add(paramDef);
    }
    methodDef.Body = oldMethodDef.Body;
    _typeDef.Methods.Add(methodDef);
    oldMethodDef.Body = new MethodBody(oldMethodDef);
    var il = oldMethodDef.Body.GetILProcessor();
    Action<OpCode> op = x => il.Append(il.Create(x));
    op(OpCodes.Ldarg_0);
    op(OpCodes.Ldarg_1);
    il.Append(il.Create(OpCodes.Call, methodDef));
    op(OpCodes.Ret);
    InjectNotificationDirectly(oldMethodDef, propNames);
}

private void InjectNotificationDirectly(MethodDefinition methodDef,
                                        IEnumerable<string> propNames)
{
    var il = methodDef.Body.GetILProcessor();
    var returnInsn = il.Body.Instructions.Last();
    foreach (var s in propNames)
    {
        var loadThis = il.Create(OpCodes.Ldarg_0);
        var loadString = il.Create(OpCodes.Ldstr, s);
        var callMethod = il.Create(OpCodes.Call, _notifyMethodDef);
        il.InsertBefore(returnInsn, loadThis);
        il.InsertBefore(returnInsn, loadString);
        il.InsertBefore(returnInsn, callMethod);
    }
}

The code isn’t too complicated. The MethodDefinition passed to InjectNotification is the setter method for the property, and propNames contains the names of properties to notify change for when the setter is called. In case of multiple return points from the setter, we perform a bit of crude surgery to separate the method body from the method declaration. We provide a new method definition for the body, with a name derived from the name of the property. While in Dr Frankenstein mode, we proceed to assemble a new method body for the setter. That body consists of three instructions: push the this reference onto the stack, push the value passed to the setter onto the stack, and invoke the new method we just created out of the original method body.

Now we know that the setter has a single return point, and we can inject the notification calls. We just need to loop over the properties to notify, and inject a trio of 1) push this, 2) push property name and 3) invoke notification method for each.

And that’s it, really. We’re done. Mission accomplished, view model complete.

Of course, to make things practical, you’re gonna need a build task and a Visual Studio template as well. I’ll get to that some day.


Recursion for kids

Consider the following problem:

The field vole can have up to 18 litters (batches of offspring) each year, each litter contains up to 8 children. The newborn voles may have offspring of their own after 25 days. How many field voles can a family grow to during the course of a year?

Of course, unless you’re a native English speaker, you might wonder what the heck a field vole is. I know I did.

This is a field vole:

[Image: a field vole]

I’m not really sure if it’s technically a mouse or just a really close relative, but for all our intents and purposes, it sure is. A small, very reproductive mouse.

So, do you have an answer to the problem? No?

To provide a bit of background: this problem was presented to a class of fifth graders. Does that motivate you? Do you have an answer now?

If you do, that’s great, but if you don’t, you probably have a whole litter of questions instead. That’s OK too.

You see, the father of one of those fifth graders is a friend of mine. He emailed this problem to a rather eclectic group of people (including some with PhDs in mathematics). Between us, we came up with a list of questions including these:

  • What is the distribution of sexes among the voles?
  • What is the average number of voles per litter? And the distribution?
  • How many voles are gay?
  • How many voles die before they reach a fertile age?
  • How many voles are celibate? Alternatively, how many voles prefer to live without offspring? (Given that voles don’t use prophylactics, these questions yield equivalent results.)
  • Will ugly voles get laid?
  • What is the cheese supply like?
  • Are there cats in the vicinity?

And so on and so forth. Luckily, the fifth grade teacher was able to come up with some constraints for us. Of course, they were rather arbitrary, but perhaps not completely unreasonable:

Each litter contains exactly 8 new voles, 4 females and 4 males. No voles die during the year in question.

That’s great! Given these constraints, we can get to work on a solution.

First, we make the arbitrary choice of associating the offspring with the female voles only. The male voles will be counted as having no offspring at all. While perhaps a bit old fashioned, this greatly simplifies our task. (Of course, we could just as well have opted for the opposite.)

Now we just need to count the offspring of female voles. Since we know that the offspring function is purely deterministic, this isn’t too hard. Given a certain number of days available for reproduction, a female vole will always yield the same number of offspring. (As if women were idempotent!)

To calculate an answer, we can write a small program.


using System.Collections.Generic;

public class Voles
{
    private static int _daysBeforeFirst = 25;
    private static int _daysBetween = 20;
    private static Dictionary<int, long> _cache =
        new Dictionary<int, long>();

    public static long F(int days)
    {
        if (!_cache.ContainsKey(days))
        {
            _cache[days] = F0(days);
        }
        return _cache[days];
    }

    private static long F0(int days)
    {
        int end = days - _daysBeforeFirst;
        if (end < 0)
        {
            return 1;
        }
        int start = end % _daysBetween;
        long count = 0;
        for (int d = start; d <= end; d += _daysBetween)
        {
            count += F(d) + 1;
        }
        return 1 + 4 * count;
    }
}


The F method calculates the total number of offspring for a female vole as a function of how many days it has lived. If you call F with an input of 365 days, you’ll find that the answer is 55,784,398,225. That’s a lot of voles.

How does the algorithm work, though? Well, we assume that we start with a single newborn female vole that has 365 days available to produce offspring (with the first litter arriving after 25 days). Then the number of offspring is given by:

F(365) = 1 + 4 * F(340) + 4 + 4 * F(320) + 4 + … + 4 * F(0) + 4

Of course, you can factor out all the 4’s, like so:

F(365) = 1 + 4 * (F(340) + 1 + F(320) + 1 + … + F(0) + 1)

And that’s pretty much what the code does. In addition, it uses a cache, so that it won’t have to calculate a value twice.
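As a sanity check, we can trace a small input by hand. For F(25), end is 25 - 25 = 0 and start is 0, so the loop runs exactly once with d = 0, and F(0) is 1 (a newborn female counts only herself):

F(25) = 1 + 4 * (F(0) + 1) = 1 + 4 * 2 = 9

In other words: after 25 days we have the original female plus her first litter of 8, which is just what we’d expect.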

As you might imagine, the kids weren’t really expected to come up with a solution to this problem. Instead, they were supposed to think about recursion and reasonable constraints. Which are noble things to teach kids, for sure. More of that, please.

Nevertheless, I still think the problem kinda sucked. Even if the kids were able to come up with reasonable constraints, they wouldn’t have the tools at hand to produce an answer. Pretty demotivating, I’d say.

My friend’s son was unfazed and cool about it, though. In fact, he was content and confident that the tree structure he started drawing would yield the correct answer, if only he had a sufficiently large piece of paper. How cool is that?


Pix-It Curling

I mentioned cURL in passing in the last blog post. In case you haven’t heard of it: it is a super-useful tool you can use to issue HTTP requests from the command line (and a whole slew of other stuff). Just thought I’d jot down a quick note on how to use it to play around with the pix-it HTTP handler.

It’s a breeze, really. So effortless! If you’ve installed cURL and vim (and added them to your PATH), you can do the whole command-line ninja thing and never take your fingers off the keyboard. Fiddler is a bit more, uh, fiddly in that respect.

Here’s the workflow in its entirety. Repeat as necessary.

[Image: the command-prompt workflow]
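For illustration, the kind of request involved might look something like this. (The host, port, handler path and payload file name are all made up; I won’t vouch for the exact parameters the pix-it handler expects, see the previous post for those.)

curl http://localhost:1234/pixit.ashx --data-binary @grid.json -o result.png

That POSTs the payload and fetches the generated image straight to disk, ready for inspection. vim enters the picture when editing the payload, of course.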