Self-referential validation in Mkay

…so I implemented eval for Mkay. That sentence doesn’t have a first half, because I couldn’t think of any good reasons for doing so. I happen to think that’s a perfectly valid reason in and by itself, but I fear that’s a minority stance. But it doesn’t really matter. The second half of the sentence is true in any case. I implemented eval for Mkay.

It might be unclear to you exactly what I mean by that, though. What I mean is that Mkay now has a function (called eval) that you can call inside an Mkay expression. That function will take another Mkay expression as a string parameter and produce a boolean result when called. That result will then be used within the original Mkay expression. Still opaque? A concrete example should make it entirely transparent.

public class Guardian
{
    public string Rule { get; set; }

    [Mkay("eval Rule")]
    public string Value { get; set; }
}

So here we have a model that uses eval inside an Mkay expression. How does it work in practice? Have a look:

So what happens in the video is that the rule “(eval Rule)” that annotates the Value property says that you should take the content of the Rule property and interpret that as the rule that the Value property must adhere to. It’s sort of like SQL injection, only for Mkay. Isn’t that nice?

The string passed to eval could of course be gleaned from several sources, not just a single property. For instance, the rule “(eval (+ "and " A " " B))” creates and evaluates a validation rule by combining the string “and ” with the value of property A, a space, and the value of property B.

public class Composed
{
    public string A { get; set; }
    public string B { get; set; }

    [Mkay("eval (+ \"and \" A \" \" B)")]
    public string Value { get; set; }
}

It’s even more amusing if you go all self-referential and Douglas Hofstadter-like, and have the value and the rule be one and the same thing. To accomplish that, all you have to do is annotate your property with “(eval .)”.

public class Self
{
    [Mkay("eval .")]
    public string Me { get; set; }

    [Mkay("eval .")]
    public string Too { get; set; }
}

And then we can do stuff like this:

Can you do anything useful with this? Probably not. But you’ve got to admit it’s pretty cute.


Mkay: One validation attribute to rule them all

If you’ve ever created an ASP.NET MVC application, chances are you’ve used data annotation validators to validate user input to your models. They’re nice, aren’t they? They’re so easy to use, they’re almost like magic. You simply annotate your properties with some declarative constraint, and the good djinns of the framework provide both client-side validation and server-side validation for you. Out of thin air! The client-side validation is implemented in JavaScript and gives rapid feedback to the user during data entry, whereas the server-side validation is implemented in .NET and ensures that the data is valid even if the user should circumvent the client-side validation somehow (an obvious approach would be to disable JavaScript in the browser). Magical.

The only problem with data annotation validators is that they are pretty limited in their semantics. The built-in validation attributes only cover a few of the most common, basic use cases. For instance, you can use the [Required] attribute to ensure that a property is given a value, the [Range] attribute to specify that the value of a property must lie between two constant values, or the [RegularExpression] attribute to specify a regular expression that a string property must match. That’s all well and good, but not really suited for sophisticated validation. In case you have stronger constraints or invariants for your data model, you must reach for one of two solutions. You can use the [Remote] attribute, which allows you to do arbitrary validation at the server. In that case, however, you’re doing faux-client-side validation. What really happens behind the scenes is that you fire off an AJAX call to the server. The alternative is to implement your own custom validation attribute, and write validation logic in both .NET and in JavaScript. However, that quickly becomes tiresome. While your custom attribute certainly can do arbitrary model validation, you’ve ended up doing the work that the djinns should be doing for you. There is no magic any more, there is just grunt work. The spell is broken.

Wouldn’t it be terribly nifty if there were some way to just express your sophisticated rules and constraints declaratively as intended, and have someone else do all the hard work? That’s what I thought, too. At this point, you shouldn’t be terribly surprised to learn that such a validation attribute does, in fact, exist. I’ve made it myself. The attribute is called Mkay, and supports a simple rule DSL for expressing pretty much arbitrary constraints on property values using a LISP-like syntax. Why LISP, you ask? For three obvious reasons:

  1. LISP syntax is super-easy to parse.
  2. Any excuse to shoe-horn LISP-like expressions into a string is a good one.
  3. LISP syntax is super-easy to parse.

So that’s the syntax, but exactly what kinds of rules can you express using the Mkay rule DSL? Well, that’s pretty much up to you – that’s the whole point, after all. In an Mkay expression, you can have constant values, property access (to any property on the model), logical operators (and and or), comparison operators (equality, greater-than, less-than and so forth), arithmetic, and a handful of simple functions (such as len for string length, max for selecting max value, now for the current time etc). It’s not too hard to extend it to support additional functions, but obviously they must then be implemented/wired up in JavaScript and .NET code. A contrived example should make it clearer:


public class Person
{
    [Mkay("(< (len .) 5)", ErrorMessage = "That's too long, my friend.")]
    public string Name { get; set; }

    [Mkay("(>= . \"31.01.1900\")")]
    public DateTime BirthDate { get; set; }

    [Mkay("(<= . (now))")]
    public DateTime DeathDate { get; set; }

    [Mkay("(and (>= . BirthDate) (<= . DeathDate))")]
    public DateTime LastSeen { get; set; }
}


In case it’s not terribly obvious, the rules indicate that the length of the Name property must be less than 5, the BirthDate must be later than 31.01.1900, the DeathDate must be before the current date, and LastSeen must understandably be some time between birth and death. Makes sense?

If you’ve never seen a LISP expression before, note that LISP operators are put in front of, rather than in between, the values they operate on. This is known as prefix notation as opposed to infix notation (or postfix notation, where the operator comes at the end). An expression like “(<= . (now))” should be interpreted as “the value of the DeathDate property should be less than or equal to the value of (now)”. From this, you might deduce (correctly) that “.” is used as shorthand for the name of the property being validated. This is the first of three spoonfuls of syntactic sugar employed by Mkay to simplify the syntax for validation rules. The second spoonful is implicit “.” for comparisons, which means that you can actually write “(<= (now))” instead of “(<= . (now))”. And finally, the third spoonful lets you drop the outermost parentheses, so you end up with “<= (now)”. Using simplified syntax, the example looks like this:


public class Person
{
    [Mkay("< (len .) 5", ErrorMessage = "That's too long, my friend.")]
    public string Name { get; set; }

    [Mkay(">= \"31.01.1900\"")]
    public DateTime BirthDate { get; set; }

    [Mkay("<= (now)")]
    public DateTime DeathDate { get; set; }

    [Mkay("and (>= BirthDate) (<= DeathDate)")]
    public DateTime LastSeen { get; set; }
}

So you can see that the sugar simplifies things quite a bit. Hopefully you’ll also agree that 1) the rules expressed in Mkay are more sophisticated than what the built-in validation attributes support, and 2) you’re pretty much free to write your own arbitrary rules using Mkay, without ever having to write a custom validation attribute again. The magic is back!

However, since this is a technical blog, let’s take a look under the covers to see how things work.

The crux of doing your own custom validation is to create a validation attribute that inherits from ValidationAttribute and implements IClientValidatable. Inheriting from ValidationAttribute is what lets us hook into the server-side validation process, whereas implementing IClientValidatable gives us a chance to send the necessary instructions to the browser to enable client-side validation. We’ll look at both of those things in turn. For now, though, let’s just concentrate on creating an instance of the validation attribute itself. In Mkay, the name of the validation attribute is MkayAttribute. No surprises there.


[AttributeUsage(AttributeTargets.Property, AllowMultiple = false, Inherited = true)]
public class MkayAttribute : ValidationAttribute, IClientValidatable
{
    private readonly string _ruleSource;
    private readonly string _defaultErrorMessage;
    private readonly Lazy<ConsCell> _cell;

    public MkayAttribute(string ruleSource)
    {
        _ruleSource = ruleSource;
        _defaultErrorMessage = "Respect '{0}', mkay?".With(ruleSource);
        _cell = new Lazy<ConsCell>(() => new ExpParser(ruleSource).Parse());
    }

    protected ConsCell Tree
    {
        get { return _cell.Value; }
    }

    public IEnumerable<ModelClientValidationRule> GetClientValidationRules(
        ModelMetadata metadata, ControllerContext context) …

    protected override ValidationResult IsValid(
        object value, ValidationContext validationContext) …
}

The MkayAttribute constructor takes a single string parameter, which is supposed to contain a valid Mkay expression. Things will blow up at runtime if it doesn’t. The ExpParser class is responsible for parsing the Mkay expression into a suitable data structure known as an abstract syntax tree; AST for short. This happens lazily whenever someone tries to access the AST, which in practice means in the GetClientValidationRules and IsValid methods.

Due to LISP envy, ExpParser (simple as it is) uses lists and atoms as building blocks for the AST. Atoms represent simple things: a constant value (say, 10), the name of a property (say, BirthDate) or a symbol representing some operation (say, >). Lists are simply sequences of things, that is, sequences of lists and atoms. In Mkay, lists are built from so-called cons cells which are linked together in a chain. Each cons cell consists of two things, the first of which may be considered the content of the cell (a list or an atom), and the second of which is a reference to another cons cell or a special thing called Nil. So for instance, the Mkay expression “(< (len .) 5)” is represented by the following AST:

[Figure: Cons-cells]
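
To make that structure concrete, here is a minimal sketch of how such an AST could be modeled (only the ConsCell name comes from Mkay itself; Atom, Node and the exact shape are my assumptions):

public abstract class Node { }

public class Atom : Node
{
    public object Value { get; private set; } // e.g. the symbol "<", the number 5, or "."

    public Atom(object value) { Value = value; }
}

public class ConsCell : Node
{
    public static readonly ConsCell Nil = new ConsCell(null, null);

    public Node Car { get; private set; }     // content: an atom or a nested list
    public ConsCell Cdr { get; private set; } // the rest of the chain, or Nil

    public ConsCell(Node car, ConsCell cdr) { Car = car; Cdr = cdr; }
}

public static class Example
{
    // "(< (len .) 5)" as a chain of cons cells:
    public static readonly ConsCell Ast =
        new ConsCell(new Atom("<"),
            new ConsCell(
                new ConsCell(new Atom("len"),
                    new ConsCell(new Atom("."), ConsCell.Nil)),
                new ConsCell(new Atom(5), ConsCell.Nil)));
}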

Once we have obtained the Mkay AST, we can use it to drive the client-side and server-side validations. This happens by subjecting the original AST to a two-pronged transformation process, creating two new, technology-specific ASTs: a .NET expression tree for the server-side validation code and a JSON structure for the client-side validation code. On the server side, the expression tree is compiled at runtime into a validation function that is immediately applied. The JSON structure, on the other hand, is sent to the browser where the jQuery validation machinery picks it up, and hands it over to what is essentially a validation function factory. So there’s code generation there, too, in a way, but it happens in the browser. Conceptually, the process looks like this:

[Figure: Mkay-overview]

Let’s look at server-side validation first. To participate in server-side validation, the MkayAttribute overrides the IsValid method inherited from ValidationAttribute. The implementation looks like this:


protected override ValidationResult IsValid(object value, ValidationContext validationContext)
{
    var subject = validationContext.ObjectInstance;
    var memName = validationContext.MemberName;
    if (memName == null)
    {
        throw new Exception(
            "Property name is not set for property with display name " + validationContext.DisplayName
            + ", you should register the MkayValidator with the MkayAttribute in global.asax.");
    }
    var validator = CreateValidator(subject, memName, Tree);
    return validator()
        ? ValidationResult.Success
        : new ValidationResult(ErrorMessage ?? _defaultErrorMessage);
}

private static Func<bool> CreateValidator(object subject, string property, ConsCell ast)
{
    var builder = new ExpressionTreeBuilder(subject, property);
    var viz = new ExpVisitor<Expression>(builder);
    ast.Accept(viz);
    var exp = viz.GetResult();
    var validator = builder.DeriveFunc(exp).Compile();
    return validator;
}

As you can see, the method is passed two parameters, an object called value and a ValidationContext called validationContext. The value parameter is the value of the property we’re validating. The ValidationContext provides – uh – context for the validation, such as a reference to the full object. That’s a good thing, otherwise we wouldn’t be able to access the values of other properties and our efforts would be futile! However, we’re not entirely out of trouble – for some reason, there is no easy way to obtain the name of the property value belongs to! I presume it’s just a silly oversight by the good ASP.NET MVC folks. In fact, there is actually a MemberName property on the ValidationContext, but it is always null! There is a DisplayName which is populated, but that doesn’t have to be unique and hence isn’t a reliable pathway back to the actual property.

So what to do, what to do? A brittle solution to this very surprising problem would be to use reflection to flip through stack frames and figure out which property the current instance of the MkayAttribute was used to annotate. I’m sure I could get it to work most of the time. However, there’s a much simpler solution. Since ASP.NET MVC is open source, we can quite literally go to the source to find the root of the problem. In doing so, we find that the problem can be traced back to the Validate method in the DataAnnotationsModelValidator class. For whatever reason, the ValidationContext.MemberName property is not set, even though it would be trivial to do so (like, right before or after DisplayName is set). Luckily, ASP.NET MVC is thoroughly configurable, and so it is entirely possible to substitute your own DataAnnotationsModelValidator for the default one. So that’s what Mkay does:


public class MkayValidator : DataAnnotationsModelValidator<MkayAttribute>
{
    public MkayValidator(ModelMetadata metadata, ControllerContext context, MkayAttribute attribute)
        : base(metadata, context, attribute)
    {}

    public override IEnumerable<ModelValidationResult> Validate(object container)
    {
        var context = new ValidationContext(container ?? Metadata.Model, null, null)
        {
            DisplayName = Metadata.GetDisplayName(),
            MemberName = Metadata.PropertyName
        };
        var result = Attribute.GetValidationResult(Metadata.Model, context);
        if (result != ValidationResult.Success)
        {
            yield return new ModelValidationResult { Message = result.ErrorMessage };
        }
    }
}

Finally, the ASP.NET MVC application must be configured to use the replacement validator class. This happens in the Application_Start method in the MvcApplication class, aka global.asax:


public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // … code omitted …
        DataAnnotationsModelValidatorProvider.RegisterAdapter(
            typeof(MkayAttribute),
            typeof(MkayValidator));
    }
}


In case you forget to wire up the custom validator (which would be terribly silly of you), you’ll find that Mkay throws an exception complaining that the name of the property to validate hasn’t been set.

So we’re back on track, and we know the name of the property we’re trying to validate. Now all we have to do is to somehow transform the AST we built into .NET validation code that we can execute. To do so, we use expression trees. Expression trees allow us to programmatically build a data structure representing .NET code, and then magically transform it into executable code.

We use the venerable visitor pattern to walk the Mkay AST and build up the expression tree AST. The .NET framework offers factory methods for creating different kinds of expression nodes, such as Expression.Constant for creating a node that represents a constant value, Expression.And for the logical and operation, Expression.Add for plus and Expression.Call to represent a method call. The various methods naturally vary quite a bit with respect to what parameters they demand and the kind of expressions they return. For instance, Expression.And expects two Expression instances of type bool and returns an instance of BinaryExpression, also typed as bool. The various overloads of Expression.Call, on the other hand, return instances of MethodCallExpression and typically require a MethodInfo instance to identify the method to be called, as well as parameters to be passed to the method call. And so on and so forth. Pretty pedestrian stuff, nothing difficult.

It’s worth noting that you have to be careful and precise about types, though. For instance, the two sub-expressions passed to Expression.Add must be of the exact same type. An integer and a double cannot really be added together in a .NET program. However, if you add an integer and a double in a C# program, the compiler will make the necessary conversion for you (by turning the integer into a double). When using expression trees you need to handle such conversions manually. That is, you need to identify the type mismatch and see if you can resolve it by converting the type of one of the values into the type of the other. The general problem is known as unification in computer science and involves formulae that will make the head of the uninitiated hurt. However, Mkay takes a very simple approach by performing a lookup of available conversions for the types involved.

When the expression tree has been built, we wrap things up by enveloping it in a lambda expression node of type Expression<Func<bool>>. This gives us access to a magical method called Compile. The Compile method is magical because it turns your expression tree into a validation method that can be executed. And of course that’s exactly what we do. If the state of the object is such that the validation method returns true, everything is well. Otherwise, we complain.
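
To make the mechanics concrete, here is a small self-contained sketch (not Mkay’s actual builder code, just the same standard Expression APIs applied by hand), including the manual type conversion discussed above:

using System;
using System.Linq.Expressions;

class ExpressionTreeDemo
{
    static void Main()
    {
        // Build the tree for: (double)3 + 2.5 > 5.0.
        // Expression.Add demands operands of the exact same type,
        // so the int constant is converted to double explicitly.
        Expression left = Expression.Convert(Expression.Constant(3), typeof(double));
        Expression sum = Expression.Add(left, Expression.Constant(2.5));
        Expression test = Expression.GreaterThan(sum, Expression.Constant(5.0));

        // Wrap it in a parameterless lambda and compile to an executable delegate.
        Func<bool> validator = Expression.Lambda<Func<bool>>(test).Compile();
        Console.WriteLine(validator()); // True
    }
}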

[Screenshot: Mkay-client-outline]

So as you can see, we have rock-solid server-side validation ensuring that we have short names and no suspicious deaths set in the future. We also happen to have a hard-coded earliest birth date, as well as a guarantee against zombies, but the screenshot doesn’t show that.

Now, let’s offer a superior user experience by doing the same checks client-side, as the user fills in the form. To do so, we must implement IClientValidatable, which in turn means we must implement a method called GetClientValidationRules.


public IEnumerable<ModelClientValidationRule> GetClientValidationRules(
    ModelMetadata metadata, ControllerContext context)
{
    var propName = metadata.PropertyName;
    var builder = new JsonBuilder(propName);
    var visitor = new ExpVisitor<JObject>(builder);
    Tree.Accept(visitor);
    var ast = visitor.GetResult();
    var json = new JObject(
        new JProperty("rule", _ruleSource),
        new JProperty("property", propName),
        new JProperty("ast", ast));
    var rule = new ModelClientValidationRule
    {
        ErrorMessage = ErrorMessage ?? _defaultErrorMessage,
        ValidationType = "mkay"
    };
    rule.ValidationParameters["rule"] = json.ToString();
    yield return rule;
}

The string “mkay” that is set for the ValidationType is essentially a magic string that you need to match up on the JavaScript side. The same goes for the string “rule” that is used as a key for the ValidationParameters dictionary.


jQuery.validator.addMethod("mkay", function (value, element, param) {
    "use strict";
    var ruledata = JSON && JSON.parse(param) || $.parseJSON(param);
    var validator = MKAY.getValidator(ruledata.rule, ruledata.ast);
    return validator();
});

jQuery.validator.unobtrusive.adapters.add("mkay", ["rule"], function (options) {
    "use strict";
    options.rules.mkay = options.params.rule;
    options.messages.mkay = options.message;
});


On the JavaScript side, we have to hook up our client-side validation code to the machinery that is called unobtrusive validation in jQuery. As you can see, the magic strings “mkay” and “rule” appear at various places in the code. Apart from the plumbing, nothing much happens here. A payload of JSON is picked up, deserialized, and passed to a validation function factory thing called MKAY.getValidator. That’s where the JSON AST is turned into an actual JavaScript function. First, though, let’s see an example of a JSON AST.


{
    "type": "call",
    "value": ">",
    "operands": [
        {
            "type": "property",
            "value": "X"
        },
        {
            "type": "call",
            "value": "+",
            "operands": [
                {
                    "type": "property",
                    "value": "A"
                },
                {
                    "type": "property",
                    "value": "B"
                },
                {
                    "type": "property",
                    "value": "C"
                }
            ]
        }
    ]
}

This example shows the JSON AST for the Mkay expression “(> X (+ A B C))”. So in other words, the rule states that the value of X should be greater than the sum of A, B and C.

As we saw earlier, the deserialized JSON is passed to a validation function factory. The transformation process is conceptually pretty simple: every node in the JSON AST becomes a function. A function may be composed from simpler functions, or it may simply return a value, such as a string or an integer. The final validation function corresponds to the root node of the AST.

Let’s look at an example. Below, you see pseudo-code for the validation function produced from the JSON AST for the Mkay expression “(> X (+ A B C))”.


function () {
    return greater-than-function(
        read-property-function("X"),
        plus-function(
            plus-function(
                plus-function(
                    0,
                    read-property-function("C")),
                read-property-function("B")),
            read-property-function("A")));
}


It is pseudo-code because it shows function names that aren’t really there. I’ve included the names to make it easier to understand how functions are composed conceptually. In reality, the validation function consists exclusively of nested anonymous functions.

An important detail is that function arguments are evaluated lazily. That is, the arguments passed to functions are not values, they are themselves functions capable of returning a value. It is the responsibility of each individual function to actually call the argument functions to retrieve the argument values. Why is this? The reason is that every operation in client-side Mkay is implemented as a function, including the logical operators and and or. Since we want short-circuiting semantics for the logical operators, we only evaluate arguments as long as things go well.


function logical(fun, breaker) {
    return function (args) {
        var copy = args.slice(0);
        while (copy.length > 0) {
            // Each argument is itself a function; evaluate it only when needed.
            var val = copy.pop()();
            if (val === breaker) {
                return breaker; // short-circuit as soon as the breaker value shows up
            }
        }
        return !breaker;
    };
}

var and = logical(function (a, b) { return a && b; }, false);

Here we see how the and function is implemented. The args parameter holds a list of functions that each evaluate to a boolean value. We evaluate the functions in turn: if any of them evaluates to false, we return false right away; if we make it through the entire list, we return true. (Note that pop takes the functions from the end of the array, so the arguments are evaluated last-to-first.)

Of course, all evaluations are postponed until we actually invoke the top-level validation function, in which case the evaluations necessary to reach a conclusion are carried out.

That’s all there is to it, really. Now we have client-side validation in Mkay. In practice, it might look like this:

And with that, we’re done. We’ll never have to write a custom validation attribute again, because Mkay is the one validation attribute to rule them all. The code is available here.

Update: Mkay is now available as a nuget package.


Strings dressed in nested tags

If you read the previous blog post, you might wonder if you can wrap a string in nested tags, you know, something like this:


Func<string, string> nested =
    s => s.Tag("td").Colspan("2")
          .Width("100")
          .Tag("tr")
          .Tag("table").Cellpadding("10")
          .Border("1");


And the answer is no. No, you can’t. Well you can, but it’s not going to give you the result you want. For instance, if you apply the transform to the string “Hello”, you’ll get this:

[Screenshot: Bad-nesting-round]

Which is useless.

The reason is obviously that the Tag method calls following the first one will all be channeled into the same Tag. Even though there’s an implicit cast to string, there’s nothing in the code triggering that cast. Of course, you could explicitly call ToString on the Tag, like so:


Func<string, string> nested =
    s => s.Tag("td").Colspan("2")
          .Width("100")
          .ToString()
          .Tag("tr").ToString()
          .Tag("table").Cellpadding("10")
          .Border("1");

But that’s admitting defeat, since it breaks the illusion we’re trying to create. Plus it’s ugly.

A better way of working around the problem is to compose simple one-tag transforms, like so:


Func<string, string> cell =
    s => s.Tag("td").Colspan("2")
          .Width("100");

Func<string, string> row =
    s => s.Tag("tr");

Func<string, string> table =
    s => s.Tag("table").Cellpadding("10")
          .Border("1");

Func<string, string> nested =
    s => table(row(cell(s)));

Which is kind of neat and yields the desired result:

[Screenshot: Good-nesting-round]

But we can attack the problem more directly. There’s not a whole lot we can do to prevent our Tag object from capturing the subsequent method calls to Tag. But we are free to respond to those method calls in any ol’ way we like. A trivial change to TryInvokeMember will do just nicely:


public override bool TryInvokeMember(
    InvokeMemberBinder binder,
    object[] args,
    out object result)
{
    string arg = GetValue(args);
    string methodName = binder.Name;
    if (methodName == "Tag" && arg != null)
    {
        result = ToString().Tag(arg);
    }
    else
    {
        _props[methodName] = arg ?? string.Empty;
        result = this;
    }
    return true;
}


So we just single out calls for a method named Tag with a single string parameter. For those method calls, we’re not going to do the regular fluent collection of method names and parameters thing. Instead, we’ll convert the existing Tag to a string, and return a brand new Tag to wrap that string. And now we can go a-nesting tags as much as we’d like, and still get the result we wanted. Win!
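
For instance, applying the nested transform from the top of this post to the string “Hello” now yields the full table markup (a usage sketch; the attribute ordering reflects insertion order in the underlying Dictionary, which holds in practice):

Func<string, string> nested =
    s => s.Tag("td").Colspan("2")
          .Width("100")
          .Tag("tr")
          .Tag("table").Cellpadding("10")
          .Border("1");

Console.WriteLine(nested("Hello"));
// <table cellpadding="10" border="1"><tr><td colspan="2" width="100">Hello</td></tr></table>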


Strings dressed in tags

In a project I’m working on, we needed a simple way of wrapping strings in tags in a custom grid in our ASP.NET MVC application. The strings should only be wrapped given certain conditions. We really wanted to avoid double if checks, you know, once for the opening tag and once for the closing tag?

We ended up using a Func from string to string to perform wrapping as appropriate. By default, the Func would just be the identity function; that is, it would return the string unchanged. When the right conditions were fulfilled, though, we’d replace it with a Func that would create a new string, where the original one was wrapped in the appropriate tag.

The code I came up with lets you write transforms such as this:


Func<string, string> transform =
    s => s.Tag("a")
          .Href("http://einarwh.posterous.com")
          .Style("font-weight: bold");

Which is pretty elegant and compact, don’t you think? Though perhaps a bit unusual. In particular, you might be wondering about the following:

  1. How come there’s a Tag method on the string?
  2. Where do the other methods come from?
  3. How come the return value is a string?

So #1 is easy, right? It has to be an extension method. As you well know, an extension method is just an illusion created by the C# compiler. But it’s a neat illusion that allows for succinct syntax. The extension method looks like this:


public static class StringExtensions
{
    public static dynamic Tag(this string content, string name)
    {
        return new Tag(name, content);
    }
}

So it simply creates an instance of the Tag class, passing in the string to be wrapped and the name of the tag. That’s all. So that explains #2 as well, right? Href and Style must be methods defined on the Tag class? Well no. That would be tedious work, since we’d need methods for all possible HTML tag attributes. I’m not doing that.

If you look closely at the signature of the Tag method, you’ll see that it returns an instance of type dynamic. Now what does that mean, exactly? When dynamic was introduced in C# 4, prominent bloggers were all “oooh it’s statically typed as dynamic, my mind is blown, yada yada yada”, you know, posing as if they didn’t have giant brains grokking this stuff pretty easily? It’s not that hard. As usual, the compiler is sugaring the truth for us. Our trusty ol’ friend ILSpy is kind enough to let us figure out what dynamic really means, by revealing all the gunk the compiler spews out in response to it. You’ll find that it introduces a CallSite at the point in code when you’re interacting with the dynamic type, as well as a CallSiteBinder to handle the run-time binding of operations on the CallSite.

We don’t have to deal with all of that, though. Long story short, Tag inherits from DynamicObject, a built-in building block for creating types with potentially interesting dynamic behaviour. DynamicObject exposes several virtual methods that are called during run-time method dispatch. So basically when the run-time is trying to figure out which method to invoke and to invoke it, you’ve got these nice hooks where you can insert your own stuff. Tag, for instance, implements its own version of TryInvokeMember, which is invoked by the run-time to, uh, you know, try to invoke a member? It takes the following arguments:

  • An instance of InvokeMemberBinder (a subtype of CallSiteBinder) which provides run-time binding information.
  • An array of objects containing any arguments passed to the method.
  • An out parameter which should be assigned the return value for the method.

Here is Tag‘s implementation of TryInvokeMember:


public override bool TryInvokeMember(
    InvokeMemberBinder binder,
    object[] args,
    out object result)
{
    _props[binder.Name] = GetValue(args) ?? string.Empty;
    result = this;
    return true;
}

private string GetValue(object[] args)
{
    if (args.Length > 0)
    {
        var arg = args[0] as string;
        if (arg != null)
        {
            return arg;
        }
    }
    return null;
}

What does it do? Well, not a whole lot, really. Essentially it just hamsters values from the method call (the method name and its first argument) in a dictionary. So for instance, when trying to call the Href method in the example above, it’s going to store the value “http://einarwh.posterous.com” for the key “href”. Simple enough. And what about the return value from the Href method call? We’ll just return the Tag instance itself. That way, we get a nice fluent composition of method calls, all of which end up in the Tag‘s internal dictionary. Finally we return true from TryInvokeMember to indicate that the method call succeeded.

Of course, you’re not going to get any IntelliSense to help you get the attributes for your HTML tags right. If you misspell Href, that’s your problem. There’s no checking of anything, this is all just a trick for getting a compact syntax.

Finally, Tag defines an implicit cast to string, which explains #3. The implicit cast just invokes the ToString method on the Tag instance.


public static implicit operator string(Tag tag)
{
    return tag.ToString();
}

public override string ToString()
{
    var sb = new StringBuilder();
    sb.Append("<").Append(_name);
    foreach (var p in _props)
    {
        sb.Append(" ")
          .Append(p.Key.ToLower())
          .Append("=\"")
          .Append(p.Value)
          .Append("\"");
    }
    sb.Append(">")
      .Append(_content)
      .Append("</")
      .Append(_name)
      .Append(">");
    return sb.ToString();
}

The ToString method is responsible for actually wrapping the original string in opening and closing tags, as well as injecting any hamstered dictionary entries into the opening tag as attributes.

And that’s it, really. That’s all there is. Here’s the complete code:


namespace DynamicTag
{
    class Program
    {
        static void Main()
        {
            string s = "blog"
                .Tag("a")
                .Href("http://einarwh.posterous.com")
                .Style("font-weight: bold");
            Console.WriteLine(s);
            Console.ReadLine();
        }
    }

    public class Tag : DynamicObject
    {
        private readonly string _name;
        private readonly string _content;
        private readonly IDictionary<string, string> _props =
            new Dictionary<string, string>();

        public Tag(string name, string content)
        {
            _name = name;
            _content = content;
        }

        public override bool TryInvokeMember(
            InvokeMemberBinder binder,
            object[] args,
            out object result)
        {
            _props[binder.Name] = GetValue(args) ?? string.Empty;
            result = this;
            return true;
        }

        private string GetValue(object[] args)
        {
            if (args.Length > 0)
            {
                var arg = args[0] as string;
                if (arg != null)
                {
                    return arg;
                }
            }
            return null;
        }

        public override string ToString()
        {
            var sb = new StringBuilder();
            sb.Append("<").Append(_name);
            foreach (var p in _props)
            {
                sb.Append(" ")
                  .Append(p.Key.ToLower())
                  .Append("=\"")
                  .Append(p.Value)
                  .Append("\"");
            }
            sb.Append(">")
              .Append(_content)
              .Append("</")
              .Append(_name)
              .Append(">");
            return sb.ToString();
        }

        public static implicit operator string(Tag tag)
        {
            return tag.ToString();
        }
    }

    public static class StringExtensions
    {
        public static dynamic Tag(this string content, string name)
        {
            return new Tag(name, content);
        }
    }
}



Introducing μnit

Last week, I teamed up with Bjørn Einar (control-engineer gone js-hipster) and Jonas (bona fide smalltalk hacker) to talk about .NET gadgeteer at the NDC 2012 conference in Oslo. .NET gadgeteer is a rapid prototyping platform for embedded devices running the .NET micro framework – a scaled down version of the .NET framework itself. You can read the abstract of our talk here if you like. The talk itself is available online as well. You can view it here.

The purpose of the talk was to push the envelope a bit, and try out things that embedded .NET micro devices aren’t really suited for. We think it’s important for developers to fool around a bit, without considering mundane things like business value. That allows for immersion and engagement in projects that are pure fun.

I started gently though, with a faux-test driven implementation of Conway’s Game of Life. That is, I wrote the implementation of Life first, and then retro-fitted a couple of unit tests to make it seem like I’d followed the rules of the TDD police. That way I could conjure up the illusion of a true software craftsman, when in fact I’d just written a few tests after the implementation was done, regression tests if you will.

I feel like I had a reasonable excuse for cheating though: at the time, there were no unit testing frameworks available for the .NET micro framework. So you know how TDD opponents find it tedious to write the test before the implementation? Well, in this case I would have to write the unit testing framework before writing the test as well. So the barrier to entry was a wee bit higher.

Now in order to create the illusion of proper craftsmanship in retrospect, I did end up writing tests, and in order to do that, I did have to write my own testing framework. So procrastination didn’t really help all that much. But there you go. Goes to show that the TDD police is on to something, I suppose.

Anyways, the testing framework I wrote is called μnit, pronounced [mju:nit]. Which is a terribly clever name, I’m sure you’ll agree. First off, the μ looks very much like a u. So in terms of glyphs, it basically reads like unit. At the same time, the μ is used as a prefix signifying “micro” in the metric system of measurement – which is perfect since it’s written for the .NET *micro* framework. So yeah, it just reeks of clever, that name.

Implementation-wise it’s pretty run-of-the-mill, though. You’ll find that μnit works just about like any other xUnit framework out there. While the .NET micro framework is obviously scaled down compared to the full .NET framework, it is not a toy framework. Among the capabilities it shares with its bigger sibling is reflection, which is the key ingredient in all the xUnit frameworks I know of. Or at least I suppose it is, I haven’t really looked at the source code of any of them. Guess I should. Bound to learn something.

Anyways, the way I think these frameworks work is that you have some mechanics for identifying test methods hanging off of test classes. For each test method, you create an instance of the test class, run the method, and evaluate the result. Since you don’t want to state explicitly which test methods to run, you typically use reflection to identify and run all the test methods instead. At least that’s how μnit works.

One feature that got axed in the .NET micro framework is custom attributes, and hence there can be no [Test] annotation for labelling test methods. So μnit uses naming conventions for identifying test methods instead, just like in jUnit 3 and earlier. But that’s just cosmetics, it doesn’t really change anything. In μnit we use the arbitrary yet common convention that test methods should start with the prefix “Test”. In addition, they must be public, return void and have no parameters. Test classes must inherit from the Fixture base class, and must have a parameterless constructor. All catering for the tiny bit of reflection voodoo necessary to run the tests.
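
So a test class might look like this (a hypothetical example: LifeGrid is a stand-in for the class under test, and I’m assuming an Assert helper that throws the AssertFailedException we’ll meet below):

public class LifeTests : Fixture
{
    private LifeGrid _grid; // stand-in for whatever you're testing

    public override void Setup()
    {
        _grid = new LifeGrid(8, 8);
    }

    // Public, void, parameterless, name starts with "Test": μnit will find and run it.
    public void TestNewGridHasNoLiveCells()
    {
        Assert.AreEqual(0, _grid.LiveCellCount);
    }
}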

Here’s the Fixture class that all test classes must inherit from:


namespace Mjunit
{
    public abstract class Fixture
    {
        public virtual void Setup() {}
        public virtual void Teardown() {}
    }
}


As you can see, Fixture defines empty virtual methods for set-up and tear-down, named Setup and Teardown, respectively. Test classes can override these to make something actually happen before and after a test method is run. Conventional stuff.

The task of identifying test methods to run is handled by the TestFinder class.


namespace Mjunit
{
    public class TestFinder
    {
        public ArrayList FindTests(Assembly assembly)
        {
            var types = assembly.GetTypes();
            var fixtures = GetTestFixtures(types);
            var groups = GetTestGroups(fixtures);
            return groups;
        }

        private ArrayList GetTestFixtures(Type[] types)
        {
            var result = new ArrayList();
            for (int i = 0; i < types.Length; i++)
            {
                var t = types[i];
                if (t.IsSubclassOf(typeof(Fixture)))
                {
                    result.Add(t);
                }
            }
            return result;
        }

        private ArrayList GetTestGroups(ArrayList fixtures)
        {
            var result = new ArrayList();
            foreach (Type t in fixtures)
            {
                var g = new TestGroup(t);
                if (g.NumberOfTests > 0)
                {
                    result.Add(g);
                }
            }
            return result;
        }
    }
}


You might wonder why I’m using the feeble, untyped ArrayList, giving the code that unmistakable old-school C# 1.1 tinge? The reason is simple: the .NET micro framework doesn’t have generics. But we managed to get by in 2003, we’ll manage now.

What the code does is pretty much what we outlined above: fetch all the types in the assembly, identify the ones that inherit from Fixture, and proceed to create a TestGroup for each test class we find. A TestGroup is just a thin veneer on top of the test class:


namespace Mjunit
{
    class TestGroup : IEnumerable
    {
        private readonly Type _testClass;
        private readonly ArrayList _testMethods = new ArrayList();

        public TestGroup(Type testClass)
        {
            _testClass = testClass;
            var methods = _testClass.GetMethods();
            for (int i = 0; i < methods.Length; i++)
            {
                var m = methods[i];
                // Guard against names shorter than the prefix before calling Substring.
                if (m.Name.Length >= 4 &&
                    m.Name.Substring(0, 4) == "Test" &&
                    m.ReturnType == typeof(void))
                {
                    _testMethods.Add(m);
                }
            }
        }

        public Type TestClass
        {
            get { return _testClass; }
        }

        public int NumberOfTests
        {
            get { return _testMethods.Count; }
        }

        public IEnumerator GetEnumerator()
        {
            return _testMethods.GetEnumerator();
        }
    }
}
}


The TestFinder is used by the TestRunner, which does the bulk of the work in μnit, really. Here it is:


namespace Mjunit
{
    public class TestRunner
    {
        private Thread _thread;
        private Assembly _assembly;
        private bool _done;

        public event TestRunEventHandler SingleTestComplete;
        public event TestRunEventHandler TestRunStart;
        public event TestRunEventHandler TestRunComplete;

        public TestRunner() {}

        public TestRunner(ITestClient client)
        {
            RegisterClient(client);
        }

        public TestRunner(ArrayList clients)
        {
            foreach (ITestClient c in clients)
            {
                RegisterClient(c);
            }
        }

        public bool Done
        {
            get { return _done; }
        }

        public void RegisterClient(ITestClient client)
        {
            TestRunStart += client.OnTestRunStart;
            SingleTestComplete += client.OnSingleTestComplete;
            TestRunComplete += client.OnTestRunComplete;
        }

        public void Run(Type type)
        {
            Run(Assembly.GetAssembly(type));
        }

        public void Run(Assembly assembly)
        {
            _assembly = assembly;
            _thread = new Thread(DoRun);
            _thread.Start();
        }

        public void Cancel()
        {
            _thread.Abort();
        }

        private void DoRun()
        {
            FireCompleteEvent(TestRunStart, null);
            var gr = new TestGroupResult(_assembly.FullName);
            try
            {
                var finder = new TestFinder();
                var groups = finder.FindTests(_assembly);
                foreach (TestGroup g in groups)
                {
                    gr.AddResult(Run(g));
                }
            }
            catch (Exception ex)
            {
                Debug.Print(ex.Message);
                Debug.Print(ex.StackTrace);
            }
            FireCompleteEvent(TestRunComplete, gr);
            _done = true;
        }

        private void FireCompleteEvent(TestRunEventHandler handler,
            ITestResult result)
        {
            if (handler != null)
            {
                var args = new TestRunEventHandlerArgs
                    { Result = result };
                handler(this, args);
            }
        }

        private TestClassResult Run(TestGroup group)
        {
            var result = new TestClassResult(group.TestClass);
            foreach (MethodInfo m in group)
            {
                var r = RunTest(m);
                FireCompleteEvent(SingleTestComplete, r);
                result.AddResult(r);
            }
            return result;
        }

        private SingleTestResult RunTest(MethodInfo m)
        {
            try
            {
                DoRunTest(m);
                return TestPassed(m);
            }
            catch (AssertFailedException ex)
            {
                return TestFailed(m, ex);
            }
            catch (Exception ex)
            {
                return TestFailedWithException(m, ex);
            }
        }

        private void DoRunTest(MethodInfo method)
        {
            Fixture testObj = null;
            try
            {
                testObj = GetInstance(method.DeclaringType);
                testObj.Setup();
                method.Invoke(testObj, new object[0]);
            }
            finally
            {
                if (testObj != null)
                {
                    testObj.Teardown();
                }
            }
        }

        private Fixture GetInstance(Type testClass)
        {
            var ctor = testClass.GetConstructor(new Type[0]);
            return (Fixture)ctor.Invoke(new object[0]);
        }

        private SingleTestResult TestFailedWithException(
            MethodInfo m, Exception ex)
        {
            return new SingleTestResult(m, TestOutcome.Fail)
                { Exception = ex };
        }

        private SingleTestResult TestFailed(
            MethodInfo m, AssertFailedException ex)
        {
            return new SingleTestResult(m, TestOutcome.Fail)
                { AssertFailedException = ex };
        }

        private SingleTestResult TestPassed(MethodInfo m)
        {
            return new SingleTestResult(m, TestOutcome.Pass);
        }
    }
}


That’s a fair amount of code, and quite a few new concepts that haven’t been introduced yet. At a high level, it’s not that complex though. It works as follows. The user of a test runner will typically be interested in notification during the test run. Hence TestRunner exposes three events that fire when the test run starts, when it completes, and when each test has been run respectively. To receive notifications, the user can either hook up to those events directly or register one or more so-called test clients. We’ll look at some examples of test clients later on. To avoid blocking test clients and support cancellation of the test run, the tests run in their own thread.
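
Putting it together, wiring up and starting a test run might look like this (a usage sketch: GameOfLifeTests and led are stand-ins, and the TestRunEventHandler signature is inferred from how FireCompleteEvent invokes it):

// Register a hardware test client and a plain event handler, then kick off the run.
var runner = new TestRunner(new LedTestClient(led));
runner.TestRunComplete += delegate(object sender, TestRunEventHandlerArgs args)
{
    Debug.Print(args.Result.NumberOfTestsPassed + " of " +
                args.Result.NumberOfTests + " tests passed.");
};
runner.Run(typeof(GameOfLifeTests)); // runs all fixtures in that type's assembly, on its own thread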

As you can see from the RunTest method, each test results in a SingleTestResult, containing a TestOutcome of Pass or Fail. I don’t know how terribly useful it is, but μnit currently distinguishes between failures due to failed assertions and failures due to other exceptions. It made sense at the time.

The SingleTestResult instances are aggregated into TestClassResult instances, which in turn are aggregated into a single TestGroupResult instance representing the entire test run. All of these classes implement ITestResult, which looks like this:


namespace Mjunit
{
    public interface ITestResult
    {
        string Name { get; }
        TestOutcome Outcome { get; }
        int NumberOfTests { get; }
        int NumberOfTestsPassed { get; }
        int NumberOfTestsFailed { get; }
    }
}


Now for a SingleTestResult, the NumberOfTests will obviously be 1, whereas for a TestClassResult it will match the number of SingleTestResult instances contained by the TestClassResult, and similarly for the TestGroupResult.
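
The aggregate classes aren’t shown here, but a composite result could plausibly be implemented along these lines (a sketch under those assumptions, not the actual μnit source):

public class TestClassResult : ITestResult
{
    private readonly Type _testClass;
    private readonly ArrayList _results = new ArrayList();

    public TestClassResult(Type testClass)
    {
        _testClass = testClass;
    }

    public void AddResult(ITestResult result)
    {
        _results.Add(result);
    }

    public string Name
    {
        get { return _testClass.FullName; }
    }

    public TestOutcome Outcome
    {
        get { return NumberOfTestsFailed == 0 ? TestOutcome.Pass : TestOutcome.Fail; }
    }

    public int NumberOfTests
    {
        get { return Count(false); }
    }

    public int NumberOfTestsPassed
    {
        get { return NumberOfTests - NumberOfTestsFailed; }
    }

    public int NumberOfTestsFailed
    {
        get { return Count(true); }
    }

    // Sum the counts of the child results, so nesting composites works too.
    private int Count(bool failedOnly)
    {
        int n = 0;
        foreach (ITestResult r in _results)
        {
            n += failedOnly ? r.NumberOfTestsFailed : r.NumberOfTests;
        }
        return n;
    }
}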

So that pretty much wraps it up for the core of μnit. Let’s take a look at how it looks at the client side, for someone who might want to use μnit to write some tests. The most convenient thing to do is probably to register a test client; that is, some object that implements ITestClient. ITestClient looks like this:


namespace Mjunit
{
    public interface ITestClient
    {
        void OnTestRunStart(object sender,
            TestRunEventHandlerArgs args);
        void OnSingleTestComplete(object sender,
            TestRunEventHandlerArgs args);
        void OnTestRunComplete(object sender,
            TestRunEventHandlerArgs args);
    }
}


The registered test client will then receive callbacks as appropriate when the tests are running.

In order to be useful, test clients typically need to translate notifications into something that a human can see and act upon if necessary. In the .NET gadgeteer world, it means you need to interact with some hardware.

For the Game of Life implementation (which can be browsed here if you’re interested) I implemented two test clients interacting with elements of the FEZ Spider kit: a DisplayTestClient that shows test results on a small display, and a LedTestClient that simply uses a multicolored LED light to give feedback to the user. Here’s the code for the latter:


namespace Mjunit.Clients.GHI
{
    public class LedTestClient : ITestClient
    {
        private readonly MulticolorLed _led;
        private bool _isBlinking;
        private bool _hasFailed;

        public LedTestClient(MulticolorLed led)
        {
            _led = led;
            Init();
        }

        public void Init()
        {
            _led.TurnOff();
            _isBlinking = false;
            _hasFailed = false;
        }

        public void OnTestRunStart(object sender,
            TestRunEventHandlerArgs args)
        {
            Init();
        }

        public void OnTestRunComplete(object sender,
            TestRunEventHandlerArgs args)
        {
            OnAnyTestComplete(sender, args);
        }

        private void OnAnyTestComplete(object sender,
            TestRunEventHandlerArgs args)
        {
            if (!_hasFailed)
            {
                if (args.Result.Outcome == TestOutcome.Fail)
                {
                    _led.BlinkRepeatedly(Colors.Red);
                    _hasFailed = true;
                }
                else if (!_isBlinking)
                {
                    _led.BlinkRepeatedly(Colors.Green);
                    _isBlinking = true;
                }
            }
        }

        public void OnSingleTestComplete(object sender,
            TestRunEventHandlerArgs args)
        {
            OnAnyTestComplete(sender, args);
        }
    }
}

As you can see, it starts the test run by turning the LED light off. Then, as individual test results come in, the LED light starts blinking. On the first passing test, it will start blinking green. It will continue to do so until a failing test result comes in, at which point it will switch to blinking red instead. Once it has started blinking red, it will stay red, regardless of subsequent results. So the LedTestClient doesn’t actually tell you which test failed, it just tells you if some test failed. Useful for a sanity check, but not much else. That’s where the DisplayTestClient comes in, since it actually shows the names of the tests as they pass or fail.

How does it look in practice? Here’s a video of μnit tests for Game of Life running on the FEZ Spider. When the tests all succeed, we proceed to run Game of Life. Whee!


A property with a view

I’ve never been much of an early adopter, so now that Silverlight is dead and cold and has been for a while, it seems appropriate for me to blog about it. More specifically, I’d like to write about the chore that is INotifyPropertyChanged and how to make practically all the grunt work go away.

(Actually, I’m not sure that Silverlight is completely dead yet. It may be that Microsoft won’t be pumping much fresh blood into its veins, but that’s not quite the same thing as being dead. A technology isn’t dead as long as there’s a market for it and all that jazz. Assuming that there are some kinds of applications (such as rich line-of-business applications) that are easier to build with Silverlight than HTML5 given the toolsets that are available, I think we’ll find that Silverlight hangs around to haunt us for quite a while. But that’s sort of a side issue. It doesn’t really matter if Silverlight is dead or not, the issue of tackling INotifyPropertyChanged is interesting in and of itself.)

So INotifyPropertyChanged is the hoop you have to jump through to enable Silverlight views to update themselves as the view models they bind to change. It’s hard to envision not wanting the view to update when the view model changes. So typically you’ll want all the properties you bind to, to automatically cause the view to refresh itself. The problem is that this doesn’t happen out of the box. Instead, there’s this cumbersome and tiresome ritual you have to go through where you implement INotifyPropertyChanged, and have all the setters in your view model holler “hey, I’ve changed” by firing the PropertyChanged event. Brains need not apply to do this kind of work; it’s just mind-numbing, repetitive plumbing code. It would be much nicer if the framework would just be intelligent enough to provide the necessary notifications all by itself. Unfortunately, that’s not the case.

Solution: IL weaving

Silver.Needle is the name I use for some code I wrote to fix that. The basic idea is to use IL manipulation to automatically turn the plain vanilla .NET properties on your view models into view-update-triggering properties with the boring but necessary plumbing just magically *there*. Look ma, no hands!

If you’re unfamiliar with IL manipulation, you might assume that it’s hard to do because it’s sort of low-level and therefore all voodooy and scary. But you’d be wrong. It might have been, without proper tools. Enter the star of this blog post: Mono.Cecil. Mono.Cecil is a library for IL manipulation written by Jb Evain. It is so powerful, it’s almost indecent: you get the feeling that IL manipulation shouldn’t be that easy. But it is, it really is. It’s a walk in the park. And the power trip you get is unbelievable.

Of course, since I rarely have original thoughts, Silver.Needle isn’t unique. You’ll find that Justin Angel described a very similar approach on his blog, more than two years ago. He uses Mono.Cecil too. So do the Kind of Magic and NotifyPropertyWeaver projects, which might be worth checking out if you actually wanted to use something like this in your project. But as always, it’s much more fun and educational to roll your own!

Disclaimer: it is fairly easy to shoot yourself in the foot when you’re meddling with IL directly. I accept no liability if you try to run any of the code included in this blog post and end up injecting IL into your cat, or causing your boss to fail spectacularly at runtime, or encountering any other unfortunate and grotesque mishap as a result of doing so. You have been warned.

Viewable properties

To do the IL manipulation, we need a way to distinguish between properties to tamper with and properties to leave alone. We’ll refer to the former as viewable properties because, you know, they’re able to work with a view?

Silver.Needle gives you two options for indicating that a property is viewable. The first option is to opt-in for individual properties on a class, by annotating each property with the Viewable attribute. The second option is to annotate the entire class as Viewable, and optionally opt-out for individual properties on that class using the Opaque attribute. In either case, the class is considered to be a “view model”, with one or more viewable properties that notify the view of any changes.


public class ViewableAttribute : Attribute {}
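
The opt-out counterpart isn’t shown in the original, but it is presumably just as bare (an assumption on my part):

public class OpaqueAttribute : Attribute {}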

So the task solved by Silver.Needle is to perform the IL voodoo necessary to make sure that the output of the C# compiler of this pretty lean and neato code:


public class PersonViewModel
{
    [Viewable]
    public string Name { get; set; }
}

…is the same as the output generated directly when compiling this cumbersome and clumsy mess:


public class PersonViewModel : INotifyPropertyChanged
{
    private string _name;

    public event PropertyChangedEventHandler PropertyChanged;

    public string Name
    {
        get
        {
            return _name;
        }
        set
        {
            _name = value;
            NotifyViewableProperty("Name");
        }
    }

    private void NotifyViewableProperty(string propertyName)
    {
        var propertyChanged = this.PropertyChanged;
        if (propertyChanged != null)
        {
            propertyChanged.Invoke(this,
                new PropertyChangedEventArgs(propertyName));
        }
    }
}

We start by using Mono.Cecil to look for types that contain such properties. It’s a simple matter of 1) loading the assembly with Mono.Cecil, 2) iterating over the types in the assembly and 3) iterating over the properties defined for each type. Of course, if we find one or more “view model” types with properties that should perform view notification, we must proceed to do the necessary IL manipulation and write the changed assembly to disk afterwards. The meat of the matter is in scanning an individual type and doing the IL manipulation. We’ll come to that shortly. The surrounding bureaucracy is handled by the NotificationTamperer class.


public class NotificationTamperer : ITamperer
{
    private readonly string _assemblyOutputFileName;

    public NotificationTamperer() : this("default_tampered.dll") {}

    public NotificationTamperer(string assemblyOutputFileName)
    {
        _assemblyOutputFileName = assemblyOutputFileName;
    }

    private static AssemblyDefinition ReadSilverlightAssembly(
        string assemblyPath)
    {
        var resolver = new DefaultAssemblyResolver();
        resolver.AddSearchDirectory(@"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\Silverlight\v4.0");
        var assembly = AssemblyDefinition.ReadAssembly(
            assemblyPath,
            new ReaderParameters { AssemblyResolver = resolver });
        return assembly;
    }

    public bool TamperWith(string assemblyPath)
    {
        var assembly = ReadSilverlightAssembly(assemblyPath);
        bool result = TamperWith(assembly);
        if (result)
        {
            assembly.Write(_assemblyOutputFileName);
        }
        return result;
    }

    private bool TamperWith(AssemblyDefinition assembly)
    {
        bool result = false;
        foreach (TypeDefinition type in assembly.MainModule.Types)
        {
            result = new TypeTamperer(type).MaybeTamperWith() || result;
        }
        return result;
    }
}

There’s not much going on here worth commenting upon, it’s just the stuff outlined above. I guess the only thing worth noting is that we need to add a reference to the Silverlight assemblies, so that Mono.Cecil can resolve type dependencies as necessary later on. (For simplicity, I just hard-coded the path to the assemblies on my system. Did I mention it’s not quite ready for the enterprise yet?)

The interesting stuff happens in the TypeTamperer. You’ll notice that the TypeTamperer works on a single type, which is passed in to the constructor. This is the type that may or may not contain viewable properties, and may or may not end up being tampered with. The type is represented by a Mono.Cecil TypeDefinition, which has collections for interfaces, methods, fields, events and so forth.

The TypeTamperer does two things. First, it looks for any viewable properties. Second, if any viewable properties were found, it ensures that the type in question implements the INotifyPropertyChanged interface, and that the viewable properties participate in the notification mechanism by raising the PropertyChanged event as appropriate.

Let’s see how the identification happens:


public bool MaybeTamperWith()
{
    return _typeDef.IsClass
        && HasPropertiesToTamperWith()
        && ReallyTamperWith();
}

private bool HasPropertiesToTamperWith()
{
    FindPropertiesToTamperWith();
    return _map.Count > 0;
}

private void FindPropertiesToTamperWith()
{
    var isViewableType = IsViewable(_typeDef);
    foreach (var prop in _typeDef.Properties
        .Where(p => IsViewable(p) || (isViewableType && !IsOpaque(p))))
    {
        HandlePropertyToNotify(prop);
    }
}

private static bool IsViewable(ICustomAttributeProvider item)
{
    return HasAttribute(item, ViewableAttributeName);
}

private static bool IsOpaque(ICustomAttributeProvider item)
{
    return HasAttribute(item, OpaqueAttributeName);
}

private static bool HasAttribute(ICustomAttributeProvider item,
    string attributeName)
{
    return item.CustomAttributes.Any(
        a => a.AttributeType.Name == attributeName);
}

As you can see, the code is very straight-forward. We just make sure that the type we’re inspecting is a class (as opposed to an interface), and look for viewable properties. If we find a viewable property, the HandlePropertyToNotify method is called. We’ll look at that method in detail later on. For now though, we’ll just note that the property will end up in an IDictionary named _map, so that the ReallyTamperWith method is called, tr
iggering the IL manipulation.

For each of the view model types, we need to make sure that the type implements INotifyPropertyChanged. From an IL manipulation point of view, this entails three things:

  • Adding interface declaration as needed.
  • Adding event declaration as needed.
  • Adding event trigger method as needed.

Silver.Needle tries to play nicely with a complete or partial hand-written implementation of INotifyPropertyChanged. It’s not too hard to do, the main complicating matter being that we need to consider inheritance. The type might inherit from another type (say, ViewModelBase) that implements the interface. Obviously, we shouldn’t do anything in that case. We should only inject implementation code for types that do not already implement the interface, either directly or in a base type. To do this, we need to walk the inheritance chain up to System.Object before we can conclude that the interface is indeed missing and proceed to inject code for the implementation.

https://gist.github.com/2340074

This is still pretty self-explanatory. The most interesting method is TypeImplementsInterface, which calls itself recursively to climb up the inheritance ladder until it either finds a type that implements INotifyPropertyChanged or a type whose base type is null (that would be System.Object).
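In outline, TypeImplementsInterface looks something like this (a sketch reconstructed from the description; see the gist for the real thing, details may differ):


// Sketch only: climbs the inheritance ladder until the interface is found
// or we run out of base types.
private static bool TypeImplementsInterface(TypeDefinition typeDef,
    string interfaceName)
{
    if (typeDef.Interfaces.Any(i => i.Name == interfaceName))
    {
        return true;
    }
    // System.Object is the only class with a null base type; reaching it
    // means the interface is genuinely missing.
    var baseTypeRef = typeDef.BaseType;
    return baseTypeRef != null
        && TypeImplementsInterface(baseTypeRef.Resolve(), interfaceName);
}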

Implementing the interface

Injecting code to implement the interface consists of two parts, just as if you were implementing the interface by writing source code by hand: 1) injecting the declaration of the interface, and 2) injecting the code to fulfil the contract defined by the interface, that is, the declaration of the PropertyChanged event handler.


private void InjectInterfaceDeclaration()
{
    _typeDef.Interfaces.Add(_types.INotifyPropertyChanged);
}

The code to add the interface declaration is utterly trivial: you just add the appropriate type to the TypeDefinition‘s Interfaces collection. You get a first indication of the power of Mono.Cecil right there. You do need to obtain the proper TypeReference (another Mono.Cecil type), though. I’ve created a helper class to keep that simple as well. The code looks like this:


public class TypeResolver
{
    private readonly TypeDefinition _typeDef;
    private readonly IDictionary<Type, TypeReference> _typeRefs =
        new Dictionary<Type, TypeReference>();
    private readonly TypeSystem _ts;
    private readonly ModuleDefinition _systemModule;
    private readonly ModuleDefinition _mscorlibModule;

    public TypeResolver(TypeDefinition typeDef)
    {
        _typeDef = typeDef;
        _ts = typeDef.Module.TypeSystem;
        Func<string, ModuleDefinition> getModule =
            m => typeDef.Module.AssemblyResolver.Resolve(m).MainModule;
        _systemModule = getModule("system");
        _mscorlibModule = getModule("mscorlib");
    }

    public TypeReference Object
    {
        get { return _ts.Object; }
    }

    public TypeReference String
    {
        get { return _ts.String; }
    }

    public TypeReference Void
    {
        get { return _ts.Void; }
    }

    public TypeReference INotifyPropertyChanged
    {
        get { return LookupSystem(typeof(INotifyPropertyChanged)); }
    }

    public TypeReference PropertyChangedEventHandler
    {
        get { return LookupSystem(typeof(PropertyChangedEventHandler)); }
    }

    public TypeReference PropertyChangedEventArgs
    {
        get { return LookupSystem(typeof(PropertyChangedEventArgs)); }
    }

    public TypeReference Delegate
    {
        get { return LookupCore(typeof(Delegate)); }
    }

    public TypeReference Interlocked
    {
        get { return LookupCore(typeof(Interlocked)); }
    }

    private TypeReference LookupCore(Type t)
    {
        return Lookup(t, _mscorlibModule);
    }

    private TypeReference LookupSystem(Type t)
    {
        return Lookup(t, _systemModule);
    }

    private TypeReference Lookup(Type t, ModuleDefinition moduleDef)
    {
        if (!_typeRefs.ContainsKey(t))
        {
            var typeRef = moduleDef.Types.FirstOrDefault(
                td => td.FullName == t.FullName);
            if (typeRef == null)
            {
                return null;
            }
            var importedTypeRef = _typeDef.Module.Import(typeRef);
            _typeRefs[t] = importedTypeRef;
        }
        return _typeRefs[t];
    }
}


Mono.Cecil comes with a built-in TypeSystem type that contains TypeReference objects for the most common types, such as Object and String. For other types, though, you need to use Mono.Cecil’s assembly resolver to get the appropriate TypeReference objects. For convenience, TypeResolver defines properties with TypeReference objects for all the types used by TypeTamperer.
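For context, the _types field used throughout the TypeTamperer snippets is just a TypeResolver instance for the type at hand. The wiring amounts to something like this (my reconstruction, not a verbatim excerpt):


// Hypothetical wiring inside TypeTamperer; the field names match the
// snippets above.
private readonly TypeDefinition _typeDef;
private readonly TypeResolver _types;

public TypeTamperer(TypeDefinition typeDef)
{
    _typeDef = typeDef;
    _types = new TypeResolver(typeDef);
}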

With the interface declaration in place, we need to provide an implementation (otherwise, we get nasty runtime exceptions).

Herein lies a potential hiccup which might lead to problems if the implementer is exceedingly stupid, though. Since Silver.Needle is a proof of concept rather than a super-robust enterprise tool, I don’t worry too much about such edge cases. Nevertheless, I try to play nice where I can (and if it’s easy to do), so here goes: the issue is that the view model type might already have a member of some sort named PropertyChanged, even though the type itself doesn’t implement INotifyPropertyChanged. If that member actually is an event handler as defined by INotifyPropertyChanged, everything is fine (I just need to make sure that I don’t add a second one). The real issue arises if there is some other member named PropertyChanged, say, a property or a method. I can’t imagine why you’d want to do such a thing, but of course there’s no stopping the inventiveness of the sufficiently stupid programmer. To avoid producing a weird assembly that fails dramatically at runtime, Silver.Needle will discover the presence of a misplaced, ill-typed PropertyChanged and give up, leaving the type untampered (and hence not implementing INotifyPropertyChanged).
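The check itself needn’t be fancy; something along these lines would do (a sketch, with a method name of my own choosing):


// Sketch: true if the type has a PropertyChanged member that would clash
// with the event we want to inject.
private bool HasConflictingPropertyChangedMember()
{
    const string Name = "PropertyChanged";
    var field = _typeDef.Fields.FirstOrDefault(f => f.Name == Name);
    if (field != null
        && field.FieldType.Name != "PropertyChangedEventHandler")
    {
        return true; // a field of the wrong type: give up
    }
    // A method or property squatting on the name is also a deal-breaker.
    return _typeDef.Methods.Any(m => m.Name == Name)
        || _typeDef.Properties.Any(p => p.Name == Name);
}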

Adding the event handler is a bit more work than you might expect. If you inspect the IL, it becomes abundantly clear that C# provides a good spoonful of syntactic sugar for events. At the IL level, you’ll find that the simple event declaration expands to this:

  • A field for the event handler.
  • An event, which hooks up the field with add and remove methods.
  • Implementation for the add and remove methods.

It’s quite a bit of IL:


.field private class [System]System.ComponentModel.PropertyChangedEventHandler PropertyChanged
.event [System]System.ComponentModel.PropertyChangedEventHandler PropertyChanged
{
.addon instance void Silver.Needle.Tests.Data.Dependencies.Complex.PersonViewModel::add_PropertyChanged(class [System]System.ComponentModel.PropertyChangedEventHandler)
.removeon instance void Silver.Needle.Tests.Data.Dependencies.Complex.PersonViewModel::remove_PropertyChanged(class [System]System.ComponentModel.PropertyChangedEventHandler)
}
.method public final hidebysig specialname newslot virtual
instance void add_PropertyChanged (
class [System]System.ComponentModel.PropertyChangedEventHandler 'value'
) cil managed
{
.maxstack 3
.locals init (
[0] class [System]System.ComponentModel.PropertyChangedEventHandler,
[1] class [System]System.ComponentModel.PropertyChangedEventHandler,
[2] class [System]System.ComponentModel.PropertyChangedEventHandler
)
IL_0000: ldarg.0
IL_0001: ldfld class [System]System.ComponentModel.PropertyChangedEventHandler Silver.Needle.Tests.Data.Dependencies.Complex.PersonViewModel::PropertyChanged
IL_0006: stloc.0
// loop start (head: IL_0007)
IL_0007: ldloc.0
IL_0008: stloc.1
IL_0009: ldloc.1
IL_000a: ldarg.1
IL_000b: call class [mscorlib]System.Delegate [mscorlib]System.Delegate::Combine(class [mscorlib]System.Delegate, class [mscorlib]System.Delegate)
IL_0010: castclass [System]System.ComponentModel.PropertyChangedEventHandler
IL_0015: stloc.2
IL_0016: ldarg.0
IL_0017: ldflda class [System]System.ComponentModel.PropertyChangedEventHandler Silver.Needle.Tests.Data.Dependencies.Complex.PersonViewModel::PropertyChanged
IL_001c: ldloc.2
IL_001d: ldloc.1
IL_001e: call !!0 [mscorlib]System.Threading.Interlocked::CompareExchange<class [System]System.ComponentModel.PropertyChangedEventHandler>(!!0&, !!0, !!0)
IL_0023: stloc.0
IL_0024: ldloc.0
IL_0025: ldloc.1
IL_0026: bne.un.s IL_0007
// end loop
IL_0028: ret
} // end of method PersonViewModel::add_PropertyChanged
.method public final hidebysig specialname newslot virtual
instance void remove_PropertyChanged (
class [System]System.ComponentModel.PropertyChangedEventHandler 'value'
) cil managed
{
.maxstack 3
.locals init (
[0] class [System]System.ComponentModel.PropertyChangedEventHandler,
[1] class [System]System.ComponentModel.PropertyChangedEventHandler,
[2] class [System]System.ComponentModel.PropertyChangedEventHandler
)
IL_0000: ldarg.0
IL_0001: ldfld class [System]System.ComponentModel.PropertyChangedEventHandler Silver.Needle.Tests.Data.Dependencies.Complex.PersonViewModel::PropertyChanged
IL_0006: stloc.0
// loop start (head: IL_0007)
IL_0007: ldloc.0
IL_0008: stloc.1
IL_0009: ldloc.1
IL_000a: ldarg.1
IL_000b: call class [mscorlib]System.Delegate [mscorlib]System.Delegate::Remove(class [mscorlib]System.Delegate, class [mscorlib]System.Delegate)
IL_0010: castclass [System]System.ComponentModel.PropertyChangedEventHandler
IL_0015: stloc.2
IL_0016: ldarg.0
IL_0017: ldflda class [System]System.ComponentModel.PropertyChangedEventHandler Silver.Needle.Tests.Data.Dependencies.Complex.PersonViewModel::PropertyChanged
IL_001c: ldloc.2
IL_001d: ldloc.1
IL_001e: call !!0 [mscorlib]System.Threading.Interlocked::CompareExchange<class [System]System.ComponentModel.PropertyChangedEventHandler>(!!0&, !!0, !!0)
IL_0023: stloc.0
IL_0024: ldloc.0
IL_0025: ldloc.1
IL_0026: bne.un.s IL_0007
// end loop
IL_0028: ret
} // end of method PersonViewModel::remove_PropertyChanged


The bad news is that it’s up to us to inject all that goo into our type. The good news is that Mono.Cecil makes it fairly easy to do. We’ll get right to it:


private void InjectEventHandler()
{
    InjectPropertyChangedField();
    InjectEventDeclaration();
}

private void InjectPropertyChangedField()
{
    // .field private class [System]System.ComponentModel.PropertyChangedEventHandler PropertyChanged
    var field = new FieldDefinition(PropertyChangedFieldName,
        FieldAttributes.Private,
        _types.PropertyChangedEventHandler);
    _typeDef.Fields.Add(field);
}

private void InjectEventDeclaration()
{
    // .event [System]System.ComponentModel.PropertyChangedEventHandler PropertyChanged
    // {
    //   .addon instance void Voodoo.ViewModel.GoalViewModel::add_PropertyChanged(class [System]System.ComponentModel.PropertyChangedEventHandler)
    //   .removeon instance void Voodoo.ViewModel.GoalViewModel::remove_PropertyChanged(class [System]System.ComponentModel.PropertyChangedEventHandler)
    // }
    var eventDef = new EventDefinition(PropertyChangedFieldName,
        EventAttributes.None,
        _types.PropertyChangedEventHandler)
    {
        AddMethod = CreateAddPropertyChangedMethod(),
        RemoveMethod = CreateRemovePropertyChangedMethod()
    };
    _typeDef.Methods.Add(eventDef.AddMethod);
    _typeDef.Methods.Add(eventDef.RemoveMethod);
    _typeDef.Events.Add(eventDef);
}

Here we add the field for the event handler, and we create an event which hooks up to two methods for adding and removing event handlers, respectively. We’re still not done, though – in fact, the bulk of the nitty gritty work remains.

That bulk is the implementation of the add and remove methods. If you examine the IL, you’ll see that the implementations are virtually identical, except for a single method call in the middle somewhere (add calls a method called Combine, remove calls Remove). We can abstract that out, like so:


private MethodDefinition CreateAddPropertyChangedMethod()
{
    return CreatePropertyChangedEventHookupMethod(
        "add_PropertyChanged",
        "Combine");
}

private MethodDefinition CreateRemovePropertyChangedMethod()
{
    return CreatePropertyChangedEventHookupMethod(
        "remove_PropertyChanged",
        "Remove");
}

private MethodDefinition CreatePropertyChangedEventHookupMethod(
    string eventHookupMethodName,
    string delegateMethodName)
{
    // .method public final hidebysig specialname newslot virtual
    //   instance void add_PropertyChanged (
    //     class [System]System.ComponentModel.PropertyChangedEventHandler 'value'
    //   ) cil managed
    var methodDef = new MethodDefinition(eventHookupMethodName,
        MethodAttributes.Public |
        MethodAttributes.Final |
        MethodAttributes.HideBySig |
        MethodAttributes.SpecialName |
        MethodAttributes.NewSlot |
        MethodAttributes.Virtual,
        _types.Void);
    var paramDef = new ParameterDefinition("value",
        ParameterAttributes.None,
        _types.PropertyChangedEventHandler);
    methodDef.Parameters.Add(paramDef);
    methodDef.Body.MaxStackSize = 3;
    for (int i = 0; i < 3; i++)
    {
        var v = new VariableDefinition(_types.PropertyChangedEventHandler);
        methodDef.Body.Variables.Add(v);
    }
    methodDef.Body.InitLocals = true;
    var il = methodDef.Body.GetILProcessor();
    Action<OpCode> op = x => il.Append(il.Create(x));
    // IL_0000: ldarg.0
    op(OpCodes.Ldarg_0);
    // IL_0001: ldfld class [System]System.ComponentModel.PropertyChangedEventHandler Voodoo.ViewModel.GoalViewModel::PropertyChanged
    var eventHandlerFieldDef = _typeDef.Fields
        .FirstOrDefault(f => f.Name == PropertyChangedFieldName);
    il.Append(il.Create(OpCodes.Ldfld, eventHandlerFieldDef));
    // IL_0006: stloc.0
    op(OpCodes.Stloc_0);
    // loop start (head: IL_0007)
    // IL_0007: ldloc.0
    var loopTargetInsn = il.Create(OpCodes.Ldloc_0);
    il.Append(loopTargetInsn);
    // IL_0008: stloc.1
    op(OpCodes.Stloc_1);
    // IL_0009: ldloc.1
    op(OpCodes.Ldloc_1);
    // IL_000a: ldarg.1
    op(OpCodes.Ldarg_1);
    // IL_000b: call class [mscorlib]System.Delegate [mscorlib]System.Delegate::Combine(class [mscorlib]System.Delegate, class [mscorlib]System.Delegate)
    var combineMethodReference = new MethodReference(
        delegateMethodName,
        _types.Delegate,
        _types.Delegate);
    var delegateParamDef = new ParameterDefinition(_types.Delegate);
    combineMethodReference.Parameters.Add(delegateParamDef);
    combineMethodReference.Parameters.Add(delegateParamDef);
    il.Append(il.Create(OpCodes.Call, combineMethodReference));
    // IL_0010: castclass [System]System.ComponentModel.PropertyChangedEventHandler
    il.Append(il.Create(OpCodes.Castclass,
        _types.PropertyChangedEventHandler));
    // IL_0015: stloc.2
    op(OpCodes.Stloc_2);
    // IL_0016: ldarg.0
    op(OpCodes.Ldarg_0);
    // IL_0017: ldflda class [System]System.ComponentModel.PropertyChangedEventHandler Voodoo.ViewModel.GoalViewModel::PropertyChanged
    il.Append(il.Create(OpCodes.Ldflda, eventHandlerFieldDef));
    // IL_001c: ldloc.2
    op(OpCodes.Ldloc_2);
    // IL_001d: ldloc.1
    op(OpCodes.Ldloc_1);
    // IL_001e: call !!0 [mscorlib]System.Threading.Interlocked::CompareExchange<class [System]System.ComponentModel.PropertyChangedEventHandler>(!!0&, !!0, !!0)
    // var declaringTypeRef = _typeDef.Module.Import(typeof(Interlocked));
    var declaringTypeRef = _types.Interlocked;
    var elementMethodRef = new MethodReference(
        "CompareExchange",
        _types.Void,
        declaringTypeRef);
    var genParam = new GenericParameter("!!0", elementMethodRef);
    elementMethodRef.ReturnType = genParam;
    elementMethodRef.GenericParameters.Add(genParam);
    var firstParamDef = new ParameterDefinition(
        new ByReferenceType(genParam));
    var otherParamDef = new ParameterDefinition(genParam);
    elementMethodRef.Parameters.Add(firstParamDef);
    elementMethodRef.Parameters.Add(otherParamDef);
    elementMethodRef.Parameters.Add(otherParamDef);
    var genInstanceMethod = new GenericInstanceMethod(elementMethodRef);
    genInstanceMethod.GenericArguments.Add(
        _types.PropertyChangedEventHandler);
    il.Append(il.Create(OpCodes.Call, genInstanceMethod));
    // IL_0023: stloc.0
    op(OpCodes.Stloc_0);
    // IL_0024: ldloc.0
    op(OpCodes.Ldloc_0);
    // IL_0025: ldloc.1
    op(OpCodes.Ldloc_1);
    // IL_0026: bne.un.s IL_0007
    il.Append(il.Create(OpCodes.Bne_Un_S, loopTargetInsn));
    // end loop
    // IL_0028: ret
    op(OpCodes.Ret);
    return methodDef;
}

It looks a little bit icky at first glance, but it’s actually quite straightforward. You just need to accurately and painstakingly reconstruct the IL statement by statement. As you can see, I’ve left the original IL in the source code as comments, to make it clear what we’re trying to reproduce. It takes patience more than brains.

The final piece of the implementation puzzle is to implement a method for firing the event. Again, Silver.Needle tries to play along with any hand-written code you have. So if you have implemented a method so-and-so to do view notification, it’s quite likely that Silver.Needle will discover it and use it. Basically it will scan all methods in the inheritance chain for your view model, and assume that a method which accepts a single string parameter, returns void and calls PropertyChangedEventHandler.Invoke somewhere in the method body is indeed a notification method.


private static MethodDefinition FindNotificationMethod(
    TypeDefinition typeDef,
    bool includePrivateMethods = true)
{
    foreach (var m in typeDef.Methods.Where(m => includePrivateMethods
        || m.Attributes.HasFlag(MethodAttributes.Public)))
    {
        if (IsProbableNotificationMethod(m))
        {
            return m;
        }
    }
    var baseTypeRef = typeDef.BaseType;
    if (baseTypeRef.FullName != "System.Object")
    {
        return FindNotificationMethod(baseTypeRef.Resolve(), false);
    }
    return null;
}

private static bool IsProbableNotificationMethod(
    MethodDefinition methodDef)
{
    return methodDef.HasBody
        && IsProbableNotificationMethodWithBody(methodDef);
}

private static bool IsProbableNotificationMethodWithBody(
    MethodDefinition methodDef)
{
    foreach (var insn in methodDef.Body.Instructions)
    {
        if (insn.OpCode == OpCodes.Callvirt)
        {
            var callee = (MethodReference) insn.Operand;
            if (callee.Name == "Invoke"
                && callee.DeclaringType.Name == "PropertyChangedEventHandler")
            {
                return true;
            }
        }
    }
    return false;
}

Should Silver.Needle fail to identify an existing notification method, though, there is no problem. After all, it’s perfectly OK to have more than one method that can be used to fire the event. Hence if no notification method is found, one is injected. No sleep lost.

In case no existing notification method was found, we need to provide one. We’re getting used to this kind of code by now:


private MethodDefinition CreateNotificationMethodDefinition()
{
    const string MethodName = "NotifyViewableProperty";
    var methodDef = new MethodDefinition(MethodName,
        MethodAttributes.Private |
        MethodAttributes.HideBySig,
        _types.Void);
    var paramDef = new ParameterDefinition("propertyName",
        ParameterAttributes.None,
        _types.String);
    methodDef.Parameters.Add(paramDef);
    methodDef.Body.MaxStackSize = 4;
    var v = new VariableDefinition(_types.PropertyChangedEventHandler);
    methodDef.Body.Variables.Add(v);
    methodDef.Body.InitLocals = true;
    var il = methodDef.Body.GetILProcessor();
    Action<OpCode> op = x => il.Append(il.Create(x));
    // IL_0000: ldarg.0
    op(OpCodes.Ldarg_0);
    // IL_0001: ldfld class [System]System.ComponentModel.PropertyChangedEventHandler Voodoo.ViewModel.GoalViewModel::PropertyChanged
    var eventHandlerFieldDef = FindEventFieldDeclaration(_typeDef);
    il.Append(il.Create(OpCodes.Ldfld, eventHandlerFieldDef));
    // IL_0006: stloc.0
    op(OpCodes.Stloc_0);
    // IL_0007: ldloc.0
    op(OpCodes.Ldloc_0);
    // IL_0008: brfalse.s IL_0017
    var jumpTargetInsn = il.Create(OpCodes.Ret); // see below, IL_0017
    il.Append(il.Create(OpCodes.Brfalse_S, jumpTargetInsn));
    // IL_000a: ldloc.0
    op(OpCodes.Ldloc_0);
    // IL_000b: ldarg.0
    op(OpCodes.Ldarg_0);
    // IL_000c: ldarg.1
    op(OpCodes.Ldarg_1);
    // IL_000d: newobj instance void [System]System.ComponentModel.PropertyChangedEventArgs::.ctor(string)
    var eventArgsTypeRef = _types.PropertyChangedEventArgs;
    var ctorRef = new MethodReference(".ctor",
        _types.Void,
        eventArgsTypeRef);
    var ctorParamDef = new ParameterDefinition("propertyName",
        ParameterAttributes.None,
        _types.String);
    ctorRef.Parameters.Add(ctorParamDef);
    ctorRef.HasThis = true;
    il.Append(il.Create(OpCodes.Newobj, ctorRef));
    // IL_0012: callvirt instance void [System]System.ComponentModel.PropertyChangedEventHandler::Invoke(object, class [System]System.ComponentModel.PropertyChangedEventArgs)
    var invokeMethodRef = new MethodReference("Invoke",
        _types.Void,
        _types.PropertyChangedEventHandler);
    invokeMethodRef.Parameters.Add(
        new ParameterDefinition(_types.Object));
    invokeMethodRef.Parameters.Add(
        new ParameterDefinition(eventArgsTypeRef));
    invokeMethodRef.HasThis = true;
    il.Append(il.Create(OpCodes.Callvirt, invokeMethodRef));
    // IL_0017: ret
    il.Append(jumpTargetInsn);
    return methodDef;
}

This produces IL for a NotifyViewableProperty method just like the one we wrote in C# in the “hand-implemented” PersonViewModel above.
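In C# terms, the injected method corresponds to something like this (not literal decompiler output, but the same shape):


// C# equivalent of the IL built above.
private void NotifyViewableProperty(string propertyName)
{
    PropertyChangedEventHandler handler = PropertyChanged;
    if (handler != null)
    {
        handler(this, new PropertyChangedEventArgs(propertyName));
    }
}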

Injecting notification

With the interface implementation and notification method in place, we finally come to the fun part – injecting the property notification itself!

Unless you’re the kind of person who uses ILSpy or ILDasm regularly, you might wonder if and how it will work with auto-properties – properties where you don’t actually provide any body for the getters and setters. Well, it doesn’t matter. Auto-properties are a C# feature; they don’t exist in IL. So you’ll find there’s a backing field there (albeit with a weird name) that the C# compiler conjured up for you. It’s just syntactic sugar to reduce typing.
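If you want to see for yourself, compile a trivial auto-property and open the assembly in ILSpy; conceptually, this is what the compiler does with it:


public class Person
{
    // What you write:
    public string Name { get; set; }

    // What the compiler generates behind the scenes (conceptually):
    //   private string <Name>k__BackingField;
    //   public string get_Name() { return <Name>k__BackingField; }
    //   public void set_Name(string value) { <Name>k__BackingField = value; }
}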

What about get-only properties? That is, properties that have getters but no setters? Well, first of all, can they change? Even if they’re get-only? Sure they can. Say you have a property which derives its value from another property. For instance, you might have an Age property which depends upon a BirthDate property, like so:


private DateTime _birthDate;
public DateTime BirthDate
{
    get { return _birthDate; }
    set { _birthDate = value; }
}

[Viewable]
public int Age
{
    get { return DateTime.Now.Year - BirthDate.Year; }
}


In the (admittedly unlikely) scenario that the BirthDate changes, the Age will change too. And if Age is a property on a view model that a view will bind to, you’ll want any display of Age to update itself automatically whenever BirthDate changes. How can we do that? Well, if we implemented this by hand, we could add a notification call in BirthDate‘s setter to say that Age changed.


private DateTime _birthDate;
public DateTime BirthDate
{
    get { return _birthDate; }
    set
    {
        _birthDate = value;
        Notify("Age");
    }
}

It feels a little iffy, since it sort of goes the wrong way – the observed knowing about the observer rather than the other way around. But that’s how you’d do it.

Silver.Needle does the same thing for you automatically. That is, for get-only properties, Silver.Needle will inspect the getter to find any calls to getters on other properties on the same object instance. If those properties turn out to have setters, notifications to update the get-only property will be injected there. If those properties are get-only too, the process repeats itself recursively. So you could have chains of properties that depend on properties that depend on properties etc.
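A contrived example of such a chain (hypothetical code, just for illustration): C depends on B, which depends on A. The only setter in sight belongs to A, so that’s where the notification for C must go.


private int _a;
public int A
{
    get { return _a; }
    set { _a = value; } // Silver.Needle injects a notification for C here
}
public int B
{
    get { return A + 1; } // get-only: depends on A
}
[Viewable]
public int C
{
    get { return B * 2; } // get-only: depends on B, hence on A
}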

To do this correctly, the injection process has two steps: first, we identify which properties depend on which; second, we do the actual IL manipulation to insert the notification calls.

So, first we identify dependencies between properties. In the normal case of a property with a setter of its own, the property will simply depend on itself. (Of course, there might be other properties that depend on it as well.) So for each property with a setter, we build a list of dependent properties – that is, properties that we need to inject notification calls for. Note that while we only do notification for properties tagged as Viewable, we might inject notification calls into the setters of any property on the view model, Viewable or not. (In the example above, you’ll notice that BirthDate is not, in fact, tagged Viewable. When the setter is called, it will announce that Age changed, but not itself!)

The code to register the dependencies between properties is as follows:


private void HandlePropertyToNotify(PropertyDefinition prop)
{
    foreach (var affector in FindAffectingProperties(prop, new List<string>()))
    {
        AddDependency(affector, prop);
    }
}

private void AddDependency(PropertyDefinition key,
    PropertyDefinition value)
{
    if (!_map.ContainsKey(key))
    {
        _map[key] = new List<PropertyDefinition>();
    }
    _map[key].Add(value);
}

private List<PropertyDefinition> FindAffectingProperties(
    PropertyDefinition prop,
    IList<string> seen)
{
    if (seen.Any(n => n == prop.Name))
    {
        return new List<PropertyDefinition>();
    }
    seen.Add(prop.Name);
    if (prop.SetMethod != null)
    {
        return new List<PropertyDefinition> { prop };
    }
    if (prop.GetMethod != null)
    {
        return FindAffectingPropertiesFromGetter(prop.GetMethod, seen);
    }
    return new List<PropertyDefinition>();
}

private List<PropertyDefinition> FindAffectingPropertiesFromGetter(
    MethodDefinition getter,
    IList<string> seen)
{
    var result = new List<PropertyDefinition>();
    foreach (var insn in getter.Body.Instructions)
    {
        if (insn.OpCode == OpCodes.Call)
        {
            var methodRef = (MethodReference)insn.Operand;
            if (methodRef.Name.StartsWith(PropertyGetterPrefix))
            {
                // Found an affecting getter inside the current getter!
                // Get list of dependencies from this getter.
                string affectingPropName = methodRef.Name
                    .Substring(PropertyGetterPrefix.Length);
                var affectingProp = _typeDef.Properties
                    .FirstOrDefault(p => p.Name == affectingPropName);
                if (affectingProp != null)
                {
                    result.AddRange(FindAffectingProperties(affectingProp, seen));
                }
            }
        }
    }
    return result;
}

So you can see that it’s a recursive process to walk the dependency graph for a get-only property. You’ll notice that there is some code there to recognize that we’ve seen a certain property before, to avoid infinite loops when walking the graph. Of course, it might happen that we don’t find any setters to inject notification into. For instance, it may turn out that a viewable property actually depends on constant values only. In that case, Silver.Needle will simply give up, since there is no place to inject the notification.

When we have the complete list of properties and dependent properties, we can do the actual IL manipulation. That is, for each affecting property, we can inject notifications for all affected properties.

There are two possible strategies for the injection itself: simple and sophisticated. The simple strategy employed by Silver.Needle is to do notification regardless of whether any state change occurs as a result of calling the property setter. For instance, you might have some guard clause deciding whether or not to actually update the field backing the property – a conditional setter, if you will. Perhaps you want to write to the backing field only when the value has actually changed. Silver.Needle doesn’t care about that. If the setter is called, the view is notified. I believe this makes sense, since the setter is the abstraction boundary for the operation you’re performing, not whatever backing field you may or may not write to. Also, I reckon that it doesn’t *hurt* much to do a couple of superfluous view refreshes.
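A typical conditional setter of the kind I have in mind (illustrative code):


private string _name;
public string Name
{
    get { return _name; }
    set
    {
        // Guard clause: only write on actual change. Silver.Needle
        // doesn't care; it notifies on every call to the setter.
        if (_name != value)
        {
            _name = value;
        }
    }
}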

It would be entirely possible to do something a little bit more sophisticated, though – I just don’t think it’s worth the effort (plus it violates encapsulation, doesn’t it?). If we wanted to, we could use a simple program analysis to distinguish between paths that may or may not result in the view model altering state. Technically, we could take the presence of a stfld IL instruction (which stores a value to a field) as evidence of state change. We could even throw in a little bit of data flow analysis to see if the value passed to the setter was actually on the stack when the stfld was executed. In that case, we’d interpret “property change” to mean “some field acquires the value passed to the setter”, which may or may not seem right to you. So it could be done, within reason.

Notice, though, the appeal to reason. It’s easy to come up with a setter which results in an observable state change without ever calling stfld. For instance, you could push the value onto a stack instead of storing it in a field, and have the getter return the top element of the stack. Sort of contrived, but it could be done. Or you could pass the value to some method, which may or may not store it somewhere. So you see, it’s hard to do properly in the general case. Hence Silver.Needle keeps things simple, and says that the view should be notified of property change whenever the setter is called. That way, we might do a couple of superfluous notifications, but at least we don’t miss any.
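For the record, here’s the contrived stack-backed property in code: an observable state change on every set, yet no stfld in the setter itself.


// Contrived: the store happens inside Stack<T>.Push, not in a field
// of this type, so a naive stfld analysis would miss it.
private readonly Stack<string> _values = new Stack<string>();
public string Value
{
    get { return _values.Peek(); }
    set { _values.Push(value); }
}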

Now we just need to figure out where to inject the notification calls. Obviously it needs to be the last thing you do in the setter, to ensure that any state change has actually occurred before we do the notification (otherwise we’d refresh the view to show a stale property value!). That’s easy if you have a single return point from your setter, somewhat harder if there are several.

You could of course inject notification calls before each return point. That would give the correct semantics but is a bit brutish and not particularly elegant. Instead, Silver.Needle will essentially perform an extract method refactoring if there is more than one return point. The original body of the property setter is moved to a new method with a name derived from the property name. The property setter is then given a new body, consisting of a call to the new method, followed by necessary notification calls. Nice and tidy.

A third alternative would be to wrap the body of the setter in a try block and perform notification in a finally block. Yes, that would mean that notifications would be given even if an exception is thrown during the execution of the setter. Would that be a problem? No. Why not? Because you shouldn’t throw exceptions in your setters. Again, if you have complex logic in the setters of your view models, I have a meme for you: “View models, you’re doing it wrong”.

So, implementation-wise, we need to support two scenarios: with or without refactoring. In either case, we end up with a setter that has a single return point preceded by notification calls. As usual, it’s pretty straightforward to do the necessary alterations to the body of the setter using Mono.Cecil. Here’s the code:


private void InjectNotification(MethodDefinition methodDef,
    IEnumerable<string> propNames)
{
    if (_notifyMethodDef == null || methodDef == null)
    {
        return;
    }
    if (HasMultipleReturnPoints(methodDef))
    {
        RefactorSetterAndInjectNotification(methodDef, propNames);
    }
    else
    {
        InjectNotificationDirectly(methodDef, propNames);
    }
}

private bool HasMultipleReturnPoints(MethodDefinition methodDef)
{
    return methodDef.Body.Instructions.Count(
        insn => insn.OpCode == OpCodes.Ret) > 1;
}

private void RefactorSetterAndInjectNotification(
    MethodDefinition oldMethodDef,
    IEnumerable<string> propNames)
{
    var methodName = "Refactored" + oldMethodDef.Name
        .Substring(PropertySetterPrefix.Length) + "Setter";
    var methodDef = new MethodDefinition(methodName,
        oldMethodDef.Attributes,
        oldMethodDef.ReturnType);
    foreach (var oldParamDef in oldMethodDef.Parameters)
    {
        var paramDef = new ParameterDefinition(
            oldParamDef.Name,
            oldParamDef.Attributes,
            oldParamDef.ParameterType);
        methodDef.Parameters.Add(paramDef);
    }
    methodDef.Body = oldMethodDef.Body;
    _typeDef.Methods.Add(methodDef);
    oldMethodDef.Body = new MethodBody(oldMethodDef);
    var il = oldMethodDef.Body.GetILProcessor();
    Action<OpCode> op = x => il.Append(il.Create(x));
    op(OpCodes.Ldarg_0);
    op(OpCodes.Ldarg_1);
    il.Append(il.Create(OpCodes.Call, methodDef));
    op(OpCodes.Ret);
    InjectNotificationDirectly(oldMethodDef, propNames);
}

private void InjectNotificationDirectly(MethodDefinition methodDef,
    IEnumerable<string> propNames)
{
    var il = methodDef.Body.GetILProcessor();
    var returnInsn = il.Body.Instructions.Last();
    foreach (var s in propNames)
    {
        var loadThis = il.Create(OpCodes.Ldarg_0);
        var loadString = il.Create(OpCodes.Ldstr, s);
        var callMethod = il.Create(OpCodes.Call, _notifyMethodDef);
        il.InsertBefore(returnInsn, loadThis);
        il.InsertBefore(returnInsn, loadString);
        il.InsertBefore(returnInsn, callMethod);
    }
}

The code isn’t too complicated. The MethodDefinition passed to InjectNotification is the setter method for the property, and propNames contains the names of properties to notify change for when the setter is called. In case of multiple return points from the setter, we perform a bit of crude surgery to separate the method body from the method declaration. We provide a new method definition for the body, with a name derived from the name of the property. While in Dr Frankenstein mode, we proceed to assemble a new method body for the setter. That body consists of three instructions: push the this reference onto the stack, push the value passed to the setter onto the stack, and invoke the new method we just created out of the original method body.

Now we know that the setter has a single return point, and we can inject the notification calls. We just need to loop over the properties to notify, and inject a trio of 1) push this, 2) push property name and 3) invoke notification method for each.
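Translated back to C#, a tampered setter with multiple return points ends up looking more or less like this (the Age property is illustrative; the method names follow the derivations shown above):


private int _age;
public int Age
{
    get { return _age; }
    set
    {
        RefactoredAgeSetter(value);    // the original setter body, extracted
        NotifyViewableProperty("Age"); // the injected notification
    }
}

private void RefactoredAgeSetter(int value)
{
    if (value < 0) return; // multiple return points forced the refactoring
    _age = value;
}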

And that’s it, really. We’re done. Mission accomplished, view model complete.

Of course, to make things practical, you’re gonna need a build task and a Visual Studio template as well. I’ll get to that some day.


Recursion for kids

Consider the following problem:

The field vole can have up to 18 litters (batches of offspring) each year, each litter contains up to 8 children. The newborn voles may have offspring of their own after 25 days. How many field voles can a family grow to during the course of a year?

Of course, unless you’re a native English speaker, you might wonder what the heck a field vole is. I know I did.

This is a field vole:

[Image: a field vole]

I’m not really sure if it’s technically a mouse or just a really close relative, but for all our intents and purposes, it sure is. A small, very reproductive mouse.

So, do you have an answer to the problem? No?

To provide a bit of background: this problem was presented to a class of fifth graders. Does that motivate you? Do you have an answer now?

If you do, that’s great, but if you don’t, you probably have a whole litter of questions instead. That’s OK too.

You see, the father of one of those fifth graders is a friend of mine. He emailed this problem to a rather eclectic group of people (including some with PhDs in mathematics). Between us, we came up with a list of questions including these:

  • What is the distribution of sexes among the voles?
  • What is the average number of voles per litter? And the distribution?
  • How many voles are gay?
  • How many voles die before they reach a fertile age?
  • How many voles are celibate? Alternatively, how many voles prefer to live without offspring? (Given that voles don’t use prophylactics, these questions yield equivalent results.)
  • Will ugly voles get laid?
  • What is the cheese supply like?
  • Are there cats in the vicinity?

And so on and so forth. Luckily, the fifth grade teacher was able to come up with some constraints for us. Of course, they were rather arbitrary, but perhaps not completely unreasonable:

Each litter contains exactly 8 new voles, 4 females and 4 males. No voles die during the year in question.

That’s great! Given these constraints, we can get to work on a solution.

First, we make the arbitrary choice of associating the offspring with the female voles only. The male voles will be counted as having no offspring at all. While perhaps a bit old fashioned, this greatly simplifies our task. (Of course, we could just as well have opted for the opposite.)

Now we just need to count the offspring of female voles. Since we know that the offspring function is purely deterministic, this isn’t too hard. Given a certain number of days available for reproduction, a female vole will always yield the same number of offspring. (As if women were idempotent!)

To calculate an answer, we can write a small program.


public class Voles
{
    private static int _daysBeforeFirst = 25;
    private static int _daysBetween = 20;
    private static Dictionary<int, long> _cache =
        new Dictionary<int, long>();

    public static long F(int days) {
        if (!_cache.ContainsKey(days)) {
            _cache[days] = F0(days);
        }
        return _cache[days];
    }

    private static long F0(int days) {
        int end = days - _daysBeforeFirst;
        if (end < 0) {
            return 1;
        }
        int start = end % _daysBetween;
        long count = 0;
        for (int d = start; d <= end; d += _daysBetween) {
            count += F(d) + 1;
        }
        return 1 + 4 * count;
    }
}


The F method calculates the total number of offspring for a female vole as a function of how many days it has lived. If you call F with an input of 365 days, you’ll find that the answer is 55,784,398,225. That’s a lot of voles.
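If you want to check the figure yourself, it’s a one-liner:


// Should print 55784398225, the figure quoted above.
Console.WriteLine(Voles.F(365));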

How does the algorithm work, though? Well, we assume that we start with a single newborn female vole that has 365 days available to produce offspring (with the first litter arriving after 25 days). Then the number of offspring is given by:

F(365) = 1 + 4 * F(340) + 4 + 4 * F(320) + 4 + … + 4 * F(0) + 4

Of course, you can factor out all the 4’s, like so:

F(365) = 1 + 4 * (F(340) + 1 + F(320) + 1 + … + F(0) + 1)

And that’s pretty much what the code does. In addition, it uses a cache, so that it won’t have to calculate a value twice.

As you might imagine, the kids weren’t really expected to come up with a solution to this problem. Instead, they were supposed to think about recursion and reasonable constraints. Which are noble things to teach kids, for sure. More of that, please.

Nevertheless, I still think the problem kinda sucked. Even if the kids were able to come up with reasonable constraints, they wouldn’t have the tools at hand to produce an answer. Pretty demotivating, I’d say.

My friend’s son was unfazed and cool about it, though. In fact, he was content and confident that the tree structure he started drawing would yield the correct answer, if only he had a sufficiently large piece of paper. How cool is that?


Pix-It Curling

I mentioned cURL in passing in the last blog post. In case you haven’t heard of it: it is a super-useful tool you can use to issue HTTP requests from the command line (and a whole slew of other stuff). Just thought I’d jot down a quick note on how to use it to play around with the pix-it HTTP handler.

It’s a breeze, really. So effortless! If you’ve installed cURL and vim (and added them to your PATH), you can do the whole command-line ninja thing and never take your fingers off the keyboard. Fiddler is a bit more, uh, fiddly in that respect.

Here’s the workflow in its entirety. Repeat as necessary.

[Image: the command-prompt workflow]
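In case the screenshot is hard to read, the gist of it is something like this (your port number and file names will vary; the JSON format is described in the pix-it post below):


vim invader.json
curl -X POST -H "Content-Type: application/json" --data-binary @invader.json -o invader.png http://localhost:52984/pix.it
start invader.png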

Bix-It: Pix-It in the Browser

The previous blog post introduced PixItHandler, a custom HTTP handler for ASP.NET. The handler responds to HTTP POST requests containing a JSON description of an 8-bit-style image with an actual PNG image. Provided you know the expected JSON format, it’s pretty easy to use a tool like Fiddler (or cURL, for that matter) to generate renderings of your favorite retro game characters. However, while you might (and should) find those tools on the machine of a web developer, they have somewhat limited penetration among more conventional users. Web browsers have better reach, if you will.

So a challenge remains before the PixItHandler is ready to take over the world. Say we wanted to include a pix-it image in a regular HTML page? That is, we would like the user to make the request from a plain ol’ web browser, and use it to display the resulting image to the user. We can’t just use an HTML img tag as we normally would, since it issues an HTTP GET request for the resource specified in the src attribute. Moreover, we lack a way of including the JSON payload with the request. We can use another approach though. Using JQuery, we can issue the appropriate POST request with the JSON payload to the HTTP handler. So that means we’re halfway there.

We’re not quite done, though. We still need to figure out what to do with the response. The HTTP response from the PixItHandler is a binary file – it’s not something you can easily inject into the DOM for rendering. So that’s our next challenge.

Luckily, a little-known HTML feature called the data URI scheme comes to the rescue! Basically, data URIs allow you to jam a blob of binary data representing a resource in where you’d normally put the URI for that resource. So in our case, we can use a data URI in the src attribute of our img tag. To do so, we must base64-encode the PNG image and prefix it with some appropriate incantations identifying the text string as a data URI. Base64-encoding is straightforward to do, and there are JavaScript implementations you could steal right off the Internet. Good stuff.
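For example, an img tag with an inlined PNG looks like this (payload truncated; iVBORw0KGgo is how the base64 encoding of any PNG starts, since it encodes the PNG magic bytes):


<!-- Illustration only: the base64 payload is truncated. -->
<img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUg..." />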

You might think I’d declare victory at this point, but there’s one more obstacle in our way. Unfortunately, it seems that JQuery isn’t entirely happy funnelling the binary response through to us. Loading up binary data isn’t really the scenario the XMLHttpRequest object was designed to support, and so different browsers may or may not allow this to proceed smoothly. I haven’t really gone down to the bottom of the rabbit hole on this issue, because there’s a much simpler solution available: do the base64-encoding server side and pass the image data as text. So I’ve written a BixItHandler which is almost identical to the PixItHandler, except it base64-encodes the result before writing it to the response stream:


private static void WriteResponse(
    HttpResponse response,
    byte[] buffer)
{
    // Note: the MIME type is text/plain, not "plain/text".
    response.ContentType = "text/plain";
    response.Write(Convert.ToBase64String(buffer));
    response.Flush();
}

Problem solved! Now we can easily create an HTML page with some JQuery to showcase our pix-it images. Here’s one way to do it:


<html>
<head>
  <title>Invaders!</title>
  <style type="text/css">
    .invader { visibility: hidden }
  </style>
</head>
<body>
  <div class="invader">#990000</div>
  <div class="invader">#009900</div>
  <div class="invader">#000099</div>
  <script type="text/javascript" src="scripts/json2.js"></script>
  <script type="text/javascript" src="scripts/jquery-1.6.4.min.js"></script>
  <script type="text/javascript" src="scripts/pixit.js"></script>
  <script type="text/javascript">
    $(document).ready(PixIt.load);
  </script>
</body>
</html>


Not much going on in the HTML file, as you can see. Three innocuous-looking div's that aren't even visible yet, that's all. As you might imagine, they are just placeholders that our JavaScript code can work with. That's where pixit.js comes in:


var PixIt = {
    load: function () {
        var j = {
            "pixelsWide": 13,
            "pixelsHigh": 10,
            "pixelSize": 8,
            "payload":
            [
                {
                    "color": '#000000',
                    "pixels":
                    [
                        [1, 5], [1, 6], [1, 7], [2, 4],
                        [2, 5], [3, 1], [3, 3], [3, 4],
                        [3, 5], [3, 6], [3, 7], [4, 2],
                        [4, 3], [4, 5], [4, 6], [4, 8],
                        [5, 3], [5, 4], [5, 5], [5, 6],
                        [5, 8], [6, 3], [6, 4], [6, 5],
                        [6, 6], [7, 3], [7, 4], [7, 5],
                        [7, 6], [7, 8], [8, 2], [8, 3],
                        [8, 5], [8, 6], [8, 8], [9, 1],
                        [9, 3], [9, 4], [9, 5], [9, 6],
                        [9, 7], [10, 4], [10, 5],
                        [11, 5], [11, 6], [11, 7]
                    ]
                }
            ]
        };
        $('div.invader').each(function (index) {
            var inv = $(this);
            j.payload[0].color = inv.text();
            $.ajax({
                type: 'POST',
                url: "http://localhost:52984/bix.it",
                contentType: "application/json; charset=utf-8",
                // jQuery expects an object here, mapping dataType to MIME type.
                accepts: { text: "text/plain" },
                dataType: "text",
                data: JSON.stringify(j),
                success: function (d) {
                    var src = "data:image/png;base64," + d;
                    inv.html('<img src="' + src + '"/>');
                    inv.css('visibility', 'visible');
                }
            });
        });
    }
}


As you can see, we define the basic outline for a space invader as static JSON data in the script. For each of the div tags, we hijack the color code inside and use that to override the color for the space invader. Then we issue the POST request to our brand new BixItHandler, which has been configured to capture requests aimed at the bix.it virtual resource. The response is a base64-encoded PNG file, which we then insert into the src attribute of an img element that we conjure up on the fly.

And how does it look?

[Image: space invaders rendered in the browser]

Pix-It War!

So-called custom HTTP handlers can be incredibly useful. It’s almost laughably easy to write your own handler, and it enables some scenarios that might be difficult, cumbersome or inelegant to support otherwise. It’s definitely something you’ll want in your repertoire if you’re an ASP.NET programmer.

In essence, what a custom HTTP handler gives you is the ability to respond to an HTTP request by creating arbitrary content on the fly and have it pushed out to the client making the request. This content could be any type of file you like. In theory it could be HTML for the browser to render, but it typically won’t be (you have regular ASP.NET pages for that, remember?). Rather, you’ll have some way of magically conjuring up some binary artefact, such as an image or a PDF document. You could take various kinds of input to your spell, such as data submitted with the request, data from a database, the phase of the moon or what have you.

I’m sure you can imagine all kinds of useful things you might do with a custom HTTP handler. On a recent project, I used it to generate a so-called bullet graph to represent the state of a project. If you use ASP.NET’s chart control, you’ll notice that it too is built around a custom HTTP handler. Any time you need a graphical representation that changes over time, you should think of writing a custom HTTP handler to do the job.

Of course, you can use custom HTTP handlers to do rather quirky things as well, just for kicks. Which brings us to the meat of this blog post.

This particular HTTP handler is inspired by the phenomenon known as Post-It War, show-casing post-it note renderings of images (often retro game characters). Unfortunately, I’m too lazy to create actual, physical post-it figures, so I figured I’d let the ASP.NET pipeline do the heavy lifting for me. I am therefore happy to present you with a custom HTTP handler to produce 8-bit-style images, using a simple JSON interface. Bet you didn’t know you needed that.

The basic idea is for the user to POST a description of the image as JSON, and have the HTTP handler turn it into an actual image. Turns out it’s really easy to do.

We’ll let the user specify the number of coarse-grained “pixels” in two dimensions, as well as the size of the “pixels”. Based on this, the HTTP handler lays out a grid constituting the image. By default, all pixels are transparent, except those that are explicitly associated with a color. For each color, we indicate coordinates for the corresponding thus-colored pixels within the grid.

So, say we wanted to draw something simple, like the canonical space invader. We’ll draw a grid by hand, and fill in pixels as appropriate. A thirteen by ten pixel grid will do nicely.

[Image: hand-drawn space invader on a 13 by 10 grid]

We can glean the appropriate pixels to color black from the grid, which makes it rather easy to translate into a suitable JSON file. The result might be something like:


{
    "pixelsWide": 13,
    "pixelsHigh": 10,
    "pixelSize": 20,
    "payload":
    [
        {
            "color": "#000000",
            "pixels":
            [
                [1, 5], [1, 6], [1, 7], [2, 4],
                [2, 5], [3, 1], [3, 3], [3, 4],
                [3, 5], [3, 6], [3, 7], [4, 2],
                [4, 3], [4, 5], [4, 6], [4, 8],
                [5, 3], [5, 4], [5, 5], [5, 6],
                [5, 8], [6, 3], [6, 4], [6, 5],
                [6, 6], [7, 3], [7, 4], [7, 5],
                [7, 6], [7, 8], [8, 2], [8, 3],
                [8, 5], [8, 6], [8, 8], [9, 1],
                [9, 3], [9, 4], [9, 5], [9, 6],
                [9, 7], [10, 4], [10, 5],
                [11, 5], [11, 6], [11, 7]
            ]
        }
    ]
}


Is this the optimal textual format for describing a retro-style coarse-pixeled image? Of course not, I made it up in ten seconds (about the same time Brendan Eich spent designing and implementing JavaScript, or so I hear). Is it good enough for this blog post? Aaaabsolutely. But the proof of the pudding is in the eating. So let’s eat!

We’ve established that the user will be posting data like this to our custom HTTP handler; our task is to translate it into an image and feed it into the response stream. Most of this work can be done without considering the context of an HTTP handler at all. The HTTP handler is just there to expose the functionality over the web, really. It’s almost like a web service.

Unfortunately, getting the JSON from the POST request is more cumbersome than it should be. I had to reach for the request’s input stream in order to get to the data. Once you get the JSON, however, the rest is smooth sailing.

I use Json.NET‘s useful Linq-to-JSON capability to coerce the JSON into a .NET object, which I feed to the image-producing method. The Linq-to-JSON code is pretty simple, once you get used to the API:


private static PixItData ToPixItData(string json)
{
    JObject o = JObject.Parse(json);
    int size = o.SelectToken("pixelSize").Value<int>();
    int wide = o.SelectToken("pixelsWide").Value<int>();
    int high = o.SelectToken("pixelsHigh").Value<int>();
    JToken bg = o.SelectToken("background");
    Color? bgColor = null;
    if (bg != null)
    {
        string bgStr = bg.Value<string>();
        bgColor = ColorTranslator.FromHtml(bgStr);
    }
    JToken payload = o.SelectToken("payload");
    var dict = new Dictionary<Color, IEnumerable<Pixel>>();
    foreach (var t in payload)
    {
        var list = new List<Pixel>();
        foreach (var xyArray in t.SelectToken("pixels"))
        {
            int x = xyArray[0].Value<int>();
            int y = xyArray[1].Value<int>();
            list.Add(new Pixel(x, y));
        }
        string cs = t.SelectToken("color").Value<string>();
        Color clr = ColorTranslator.FromHtml(cs);
        dict[clr] = list;
    }
    // Remember to pass the background color along to the constructor.
    return new PixItData(bgColor, wide, high, size, dict);
}


You might be able to do this even simpler by using Json.NET’s deserialization support, but this works for me. There’s a little bit of fiddling in order to allow for the optional specification of a background color for the image.

The .NET type looks like this:


public class PixItData
{
    private readonly Color? _bgColor;
    private readonly int _pixelsWide;
    private readonly int _pixelsHigh;
    private readonly int _pixelSize;
    private readonly Dictionary<Color, IEnumerable<Pixel>> _data;

    public PixItData(Color? bgColor, int pixelsWide,
        int pixelsHigh, int pixelSize,
        Dictionary<Color, IEnumerable<Pixel>> data)
    {
        _bgColor = bgColor;
        _pixelsWide = pixelsWide;
        _pixelsHigh = pixelsHigh;
        _pixelSize = pixelSize;
        _data = data;
    }

    public Color? Background
    {
        get { return _bgColor; }
    }

    public int PixelsWide
    {
        get { return _pixelsWide; }
    }

    public int PixelsHigh
    {
        get { return _pixelsHigh; }
    }

    public int PixelSize
    {
        get { return _pixelSize; }
    }

    public Dictionary<Color, IEnumerable<Pixel>> Data
    {
        get { return _data; }
    }
}


The Pixel type simply represents an (X, Y) coordinate in the grid:


public class Pixel
{
    private readonly int _x;
    private readonly int _y;

    public Pixel(int x, int y)
    {
        _x = x;
        _y = y;
    }

    public int X
    {
        get { return _x; }
    }

    public int Y
    {
        get { return _y; }
    }
}


Creating the image is really, really simple. We start with a blank slate that’s either transparent or has the specified background color, and then we add colored squares to it as appropriate. Here’s how I do it:


private static Image CreateImage(PixItData data)
{
    int width = data.PixelSize * data.PixelsWide;
    int height = data.PixelSize * data.PixelsHigh;
    var image = new Bitmap(width, height);
    using (Graphics g = Graphics.FromImage(image))
    {
        if (data.Background.HasValue)
        {
            Color bgColor = data.Background.Value;
            using (var brush = new SolidBrush(bgColor))
            {
                g.FillRectangle(brush, 0, 0,
                    data.PixelSize * data.PixelsWide,
                    data.PixelSize * data.PixelsHigh);
            }
        }
        foreach (Color color in data.Data.Keys)
        {
            using (var brush = new SolidBrush(color))
            {
                foreach (Pixel p in data.Data[color])
                {
                    g.FillRectangle(brush,
                        p.X * data.PixelSize,
                        p.Y * data.PixelSize,
                        data.PixelSize,
                        data.PixelSize);
                }
            }
        }
    }
    return image;
}


That’s just about all there is to it. The entire HTTP handler looks like this:


public class PixItHandler : IHttpHandler
{
    public bool IsReusable
    {
        get { return true; }
    }

    public void ProcessRequest(HttpContext context)
    {
        string json = ReadJson(context.Request.InputStream);
        var data = ToPixItData(json);
        WriteResponse(context.Response, ToBuffer(CreateImage(data)));
    }

    private static PixItData ToPixItData(string json)
    {
        JObject o = JObject.Parse(json);
        int size = o.SelectToken("pixelSize").Value<int>();
        int wide = o.SelectToken("pixelsWide").Value<int>();
        int high = o.SelectToken("pixelsHigh").Value<int>();
        JToken bg = o.SelectToken("background");
        Color? bgColor = null;
        if (bg != null)
        {
            string bgStr = bg.Value<string>();
            bgColor = ColorTranslator.FromHtml(bgStr);
        }
        JToken payload = o.SelectToken("payload");
        var dict = new Dictionary<Color, IEnumerable<Pixel>>();
        foreach (var token in payload)
        {
            var list = new List<Pixel>();
            foreach (var xyArray in token.SelectToken("pixels"))
            {
                int x = xyArray[0].Value<int>();
                int y = xyArray[1].Value<int>();
                list.Add(new Pixel(x, y));
            }
            string cs = token.SelectToken("color").Value<string>();
            Color clr = ColorTranslator.FromHtml(cs);
            dict[clr] = list;
        }
        // Again: pass the background color along to the constructor.
        return new PixItData(bgColor, wide, high, size, dict);
    }

    private static Image CreateImage(PixItData data)
    {
        int width = data.PixelSize * data.PixelsWide;
        int height = data.PixelSize * data.PixelsHigh;
        var image = new Bitmap(width, height);
        using (Graphics g = Graphics.FromImage(image))
        {
            if (data.Background.HasValue)
            {
                Color bgColor = data.Background.Value;
                using (var brush = new SolidBrush(bgColor))
                {
                    g.FillRectangle(brush, 0, 0,
                        data.PixelSize * data.PixelsWide,
                        data.PixelSize * data.PixelsHigh);
                }
            }
            foreach (Color color in data.Data.Keys)
            {
                using (var brush = new SolidBrush(color))
                {
                    foreach (Pixel p in data.Data[color])
                    {
                        g.FillRectangle(brush,
                            p.X * data.PixelSize,
                            p.Y * data.PixelSize,
                            data.PixelSize,
                            data.PixelSize);
                    }
                }
            }
        }
        return image;
    }

    private static string ReadJson(Stream s)
    {
        s.Position = 0;
        using (var inputStream = new StreamReader(s))
        {
            return inputStream.ReadToEnd();
        }
    }

    private static byte[] ToBuffer(Image image)
    {
        using (var ms = new MemoryStream())
        {
            image.Save(ms, ImageFormat.Png);
            return ms.ToArray();
        }
    }

    private static void WriteResponse(HttpResponse response,
        byte[] buffer)
    {
        response.ContentType = "image/png";
        response.BinaryWrite(buffer);
        response.Flush();
    }
}


Warning: I take it for granted that you won’t put code this naïve into production, opening yourself up to denial-of-service attacks and what have you. It takes CPU and memory to produce images, you know.

Of course, as always when using a custom HTTP handler, we must add the handler to web.config. Like so:


<configuration>
  <system.web>
    <httpHandlers>
      <add verb="POST" path="pix.it"
           type="ample.code.pixit.PixItHandler, PixItHandler" />
    </httpHandlers>
  </system.web>
</configuration>


Note that we restrict the HTTP verb to POST, since we need the JSON data to produce the image.

Now that we have the HTTP handler in place, we can try generating some images. A simple way to invoke the handler is to use Fiddler. Fiddler makes it very easy to build your own raw HTTP request, including a JSON payload. Just what we need. Let’s create a space invader!

All we need to do is add the appropriate headers and the JSON payload.

[Image: composing the request in Fiddler's request builder]

The image only includes the headers for the request, but Fiddler also has a text area for the request body, which is where you’ll stick the JSON data.
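For reference, the raw request amounts to something like this (host and port depend on your setup; the body is the invader JSON from above):


POST http://localhost:52984/pix.it HTTP/1.1
Content-Type: application/json
Content-Length: [length of the JSON body]

{ ...the invader JSON goes here... }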

The PNG file returned by our HTTP handler looks like this:

[Image: the black space invader PNG]

Nice!

Of course, we could create more elaborate images, using more pixels and more colors. For instance, the following JSON could be used to evoke the memory of a certain anti-hero named Larry, hailing from the heyday of Sierra On-Line.


{
    "background": "#54FCFC",
    "pixelsWide": 18,
    "pixelsHigh": 36,
    "pixelSize": 4,
    "payload":
    [
        {
            "color": "#000000",
            "pixels":
            [
                [2, 19],
                [3, 19],
                [4, 3], [4, 4], [4, 5], [4, 6], [4, 7], [4, 8], [4, 9], [4, 33],
                [5, 3], [5, 4], [5, 5], [5, 6], [5, 7], [5, 8], [5, 9], [5, 33],
                [6, 2], [6, 3], [6, 4], [6, 5], [6, 6], [6, 7], [6, 8], [6, 9], [6, 10], [6, 11], [6, 33],
                [7, 2], [7, 3], [7, 4], [7, 5], [7, 6], [7, 7], [7, 8], [7, 9], [7, 10], [7, 11], [7, 33],
                [8, 2], [8, 3], [8, 4], [8, 5], [8, 6], [8, 12],
                [9, 2], [9, 3], [9, 4], [9, 5], [9, 6], [9, 12],
                [10, 2], [10, 3], [10, 12], [10, 13], [10, 14], [10, 15], [10, 18], [10, 19], [10, 20], [10, 21],
                [11, 2], [11, 3], [11, 12], [11, 13], [11, 14], [11, 15], [11, 18], [11, 19], [11, 20], [11, 21],
                [12, 3], [12, 5], [12, 33],
                [13, 3], [13, 5], [13, 33],
                [14, 33],
                [15, 33]
            ]
        },
        {
            "color": "#A8A8A8",
            "pixels":
            [
                [2, 15], [2, 16], [2, 17], [2, 18],
                [3, 15], [3, 16], [3, 17], [3, 18],
                [4, 14], [4, 15], [4, 16],
                [5, 14], [5, 15], [5, 16],
                [6, 15], [6, 16], [6, 17],
                [7, 15], [7, 16], [7, 17],
                [8, 18], [8, 24],
                [9, 18], [9, 24],
                [10, 22], [10, 23], [10, 24],
                [11, 22], [11, 23], [11, 24]
            ]
        },
        {
            "color": "#FFFFFF",
            "pixels":
            [
                [4, 30], [4, 31], [4, 32],
                [5, 30], [5, 31], [5, 32],
                [6, 12], [6, 13], [6, 14], [6, 18], [6, 19], [6, 20], [6, 21], [6, 22], [6, 23], [6, 27], [6, 28], [6, 29], [6, 30],
                [7, 12], [7, 13], [7, 14], [7, 18], [7, 19], [7, 20], [7, 21], [7, 22], [7, 23], [7, 27], [7, 28], [7, 29], [7, 30],
                [8, 13], [8, 14], [8, 15], [8, 16], [8, 17], [8, 19], [8, 20], [8, 21], [8, 22], [8, 23], [8, 25], [8, 26], [8, 27],
                [9, 13], [9, 14], [9, 15], [9, 16], [9, 17], [9, 19], [9, 20], [9, 21], [9, 22], [9, 23], [9, 25], [9, 26], [9, 27],
                [10, 16], [10, 17],
                [11, 16], [11, 17],
                [12, 16], [12, 17], [12, 24], [12, 25], [12, 26], [12, 27], [12, 28], [12, 29], [12, 30], [12, 31], [12, 32],
                [13, 16], [13, 17], [13, 24], [13, 25], [13, 26], [13, 27], [13, 28], [13, 29], [13, 30], [13, 31], [13, 32]
            ]
        },
        {
            "color": "#FC5454",
            "pixels":
            [
                [4, 19], [4, 20],
                [5, 19], [5, 20],
                [8, 8], [8, 9], [8, 10], [8, 11],
                [9, 8], [9, 9], [9, 10], [9, 11],
                [10, 5], [10, 6], [10, 7], [10, 8], [10, 9], [10, 10],
                [11, 5], [11, 6], [11, 7], [11, 8], [11, 9], [11, 10],
                [12, 6], [12, 7], [12, 8], [12, 9],
                [13, 6], [13, 7], [13, 8], [13, 9],
                [14, 6], [14, 7], [14, 16], [14, 17],
                [15, 6], [15, 7], [15, 16], [15, 17]
            ]
        },
        {
            "color": "#A80000",
            "pixels":
            [
                [8, 7],
                [9, 7],
                [10, 4], [11, 4], [12, 4], [13, 4]
            ]
        }
    ]
}


The result?

[Image: Larry, rendered]

You might say “that’s a big file to create a small image”, but that would be neglecting the greatness of the Laffer, and his impact on a generation of young adolescents.