Something for nothing

I thought I’d jot down some fairly obvious things about values in programs.

Say you have some value in your program. For instance, it could be a String, or a Thing. Then conceptually, each String belongs to the set of possible Strings, and each Thing belongs to the set of possible Things. Right?

Like so:

[Image: the set of possible Strings and the set of possible Things]

Of course, you might even have something like a function from String to Thing or the other way around. That’s no different, they’re just values that belong to some set of possible values. Hard to draw, though.

In programs, this notion of sets of possible values is baked into the concept of types. So instead of saying that some value belongs to the set of possible Things, we say that it has type Thing or is of type Thing.

I wish that was all there was to it, but alas, it gets more complicated. Not only do we want to represent the presence of values in our programs, sometimes we want to represent the absence of values as well. The absence of a value isn’t necessarily an error. It could be, but it could be fine, too. There are many valid reasons why we might end up with absences flowing through our programs. So we need to represent them.

This is where null enters the picture.

In languages like C# and Java – any object-oriented language that carries DNA from Tony Hoare’s ALGOL W – the drawing of the sets above doesn’t map directly over to types. For each so-called reference type (object values that are accessed by means of references), there’s a twist. In addition to the set of possible values, each reference type also allows for the value null:

[Image: the same sets of Strings and Things, each with null as an extra member]

It looks pretty similar, but the consequences for program semantics are significant.

The purpose of null is, of course, to represent the absence of a value of a given type. But now your type identifies a pretty weird set of possible values. In the case of Thing, for instance, you have all the legitimate actual Things, but also this weird thing that represents the absence of a Thing. So it’s decidedly not a Thing as such, yet it is of type Thing. It’s bizarre.

But it’s not just confusing to think about. It causes practical problems since null just doesn’t fit in. It’s a phony – a hologram that successfully fools the compiler, which is unable to distinguish between null and proper values of a given type. It’s nothing like the other values. Hence you need to think about it and worry about it all the time. Since it’s not really there, you can’t treat it like a proper value. You most decidedly can not invoke a method on it, which is sort of what you do with objects. The interpretation of null is radically different from the interpretation of all the other values of the same type. (Interestingly, it’s different in precisely the same way for all types: how null sticks out from legit Things mirrors how it sticks out from legit Strings and everything else)

But it gets worse. Once you’ve invited null into your home, there’s no way of getting rid of it! In other words, when you make null part of, say, the Thing type, you can no longer express the idea of one of the actual, legit Things, not that spectral special “Thing”. There is no way you can say explicitly in your program that you know that a value is present. It’s all anecdotes and circumstance. You can obviously take a look at some value at any given time in your program and decide whether it’s a legit Thing or just an illusion, but it’s completely ephemeral. You’ve given your type system a sort of brain damage that prevents it from forming memories about absence and presence of values: you might check for null, but your program immediately forgets about it!

So much for reference types. What about so-called primitive values, like integers and booleans? Neither can be null in C# or Java. So the question is: lacking null, how do we represent the absence of a value?

Well, one hack is to think “Gee, I’m not really going to use all the possible values, so I can take one of them and pretend it’s sort of like null.” So instead of interpreting the value literally, you override the interpretation for some magic values. (Using -1 as a special value for integers is a classic, in the case where your legit values are non-negative.) The consequence is that you now have two kinds of values inside your type, operating at different semantic levels and being used for different purposes. Your set of values isn’t just a set of values anymore – it’s a conglomerate of conceptually different things.

This leaves us in a situation that’s similar to the reference type situation, except it’s ad-hoc, convention-based at best. In both cases, we have two things we’re trying to express. One thing is the presence or absence of a value. And the other thing is the set of possible values that value belongs to. These are distinct, orthogonal concerns. And usually, the word of wisdom in programming is to let distinct things be distinct, and avoid mixing things up.

So how can we do that? If we reject the temptation to put special values with special interpretation rules into the same bag as the legit values, what can we do?

If you’re a C# programmer, you’re thinking that you can use Nullable to represent the absence of a primitive value instead. This is true, and you should. Nullable is a wrapper around a primitive type, creating a new type that can have either null or a legit instance of the primitive type as its value. On top of that, the compiler works pretty hard to blur the line between the underlying type and its Nullable counterpart, with special syntax and implicit type conversion from T to Nullable<T>.
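
Here’s a minimal sketch of what that blurring looks like in practice:

int? maybeAge = null;      // int? is compiler shorthand for Nullable<int>
maybeAge = 42;             // implicit conversion from int to Nullable<int>
int age = maybeAge ?? 0;   // and operators like ?? to get back to a plain int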

In a language with sum types, we can do something similar to Nullable, but in a completely generic way. What is a sum type? It is a composite type created by combining various classes of things into a single thing. Here’s an example:

type Utterance = Fee | Fie | Foe | Fum

So it’s sort of like an enumeration of variants. But it’s a rich man’s enumeration, since you can associate a variant with a value of another type. Like so:

type ThingOrNothing = Indeed of Thing | Nothing

This creates a type that distinguishes neatly between legit Things and the absence of a Thing. Either you’ll indeed have a Thing, or you’ll have nothing. And since absence stands to presence in the same way for all types, we can generalize it:

type Mayhaps<'T> = Indeed of 'T | Nothing

The nice thing is that we can now distinguish between the case where we might have a value or not and the case where we know we do have a value (provided our language doesn’t allow null, of course). In other words, we can have a function of type Mayhaps<Thing> -> Thing. This is a big deal! We can distinguish between the parts of our program that have to worry about absent values and the parts that don’t. We’ve fixed the brain damage, our program can form memories of the checks we’ve made. It’s a beautiful feat of surgery, enabled by sum types and the absence of null.

So sum types neatly solve this problem of representing absence and presence of values, on top of, and orthogonally to, the issue of defining a set of possible values for a given shape of data. What’s more, this is just one application of a general feature of the language. There is no need to complicate the language itself with special handling of absence. Instead, you’re likely to find something like Mayhaps in the standard library. In F# it’s called Option, in Elm it’s called Maybe. In languages like F# and Elm – any functional language that carries DNA from Robin Milner’s ML – you’ll find that you have both sum types and pattern matching. Pattern matching makes it very easy to work with the sum type variants.

The code you write to handle absence and presence follows certain patterns, which means you can create abstractions. For instance, say you have some function thingify of type String -> Thing, which takes a String and produces a Thing. Now suppose you’re given a Mayhaps<String> instead of a String. If you don’t have a String, you don’t get a Thing, but if indeed you do have a String, you’d like to apply thingify to get a Thing. Right?

Here’s how you might write it out, assuming F#-style syntax and pattern matching:

match dunnoCouldBe with 
| Indeed str -> Indeed (thingify str) 
| Nothing -> Nothing

This pattern is going to pop up again and again when you’re working with Mayhaps values, where the only thing that varies is the function you’d like to apply. So you can write a general function to handle this:

let quux (f : 'T -> 'U) (v : Mayhaps<'T>) : Mayhaps<'U> = 
  match v with 
  | Indeed t -> Indeed (f t) 
  | Nothing -> Nothing

Whenever you need to optionally transform an option value using some function, you just pass both of them to quux to handle it. But of course, chances are you’ll find that quux has already been written for you. In F#, it’s called Option.map, because it maps from one kind of Option value to another.

At this point, we’ve got the mechanics and practicalities of working with values that could be present or absent worked out. Now what should you do when you change your mind about where and how you handle the absence of a value? This is a design decision, even a business rule. These things change.

The short answer is that when these things change, you get a rippling change in type signatures in your program – from the place where you made a change, to the place where you handle the effect of the change. This is a good thing. This is the compiler pointing out where you need to do some work to make the change work as planned. That’s another benefit of treating the absence of values explicitly instead of mixing it up with the values themselves.

That’s all well and good, but what can you do if you’re a C# or Java programmer? What if your programming language has null and no sum types? Well, you could implement something similar to Mayhaps using the tools available to you.

Here’s a naive implementation written down very quickly, without a whole lot of thought put into it:
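
A sketch of what such a naive implementation might look like, just enough to support the usage below:

using System;

public class Mayhaps<T>
{
    private readonly bool _hasValue;
    private readonly T _value;

    private Mayhaps(bool hasValue, T value)
    {
        _hasValue = hasValue;
        _value = value;
    }

    public static Mayhaps<T> Nothing
    {
        get { return new Mayhaps<T>(false, default(T)); }
    }

    public static Mayhaps<T> Indeed(T value)
    {
        return new Mayhaps<T>(true, value);
    }

    public Mayhaps<U> Map<U>(Func<T, U> f)
    {
        return _hasValue ? Mayhaps<U>.Indeed(f(_value)) : Mayhaps<U>.Nothing;
    }
}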

Now you can write code like this:

var foo = Mayhaps<string>.Nothing;
var bar = Mayhaps<string>.Indeed("lol");
var couldBeStrings = new[] { foo, bar };
var couldBeLengths = couldBeStrings.Select(it => it.Map(s => s.Length));

A better solution would be to use a library such as Succinc<T> to do the job for you.

Regardless of how you do it, however, it’s always going to be a bit clunky. What’s more, it won’t really solve our problem.

As you’ll recall, the problem with null is that you can’t escape from it. In a sense, what is missing isn’t Mayhaps. It’s the opposite. With null, everything is Mayhaps! We still don’t have a way to say that we know that the value is there. So perhaps a better solution is to implement the opposite? We could try. Here’s a very simple type that banishes null:
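
A sketch of such a type: a wrapper that simply refuses to be constructed around null.

using System;

public class Indeed<T>
{
    private readonly T _value;

    public Indeed(T value)
    {
        if (value == null)
        {
            throw new ArgumentNullException("value");
        }
        _value = value;
    }

    public T Value
    {
        get { return _value; }
    }
}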

Now the question is – apart from being very clunky – does it work? And the depressing answer is: not really. It addresses the correct problem, but it fails for an obvious reason – how do you ensure that the Indeed value itself isn’t null? Put it inside another Indeed?

Implementing Indeed as a struct (that is, a value type) doesn’t work too great either. While a struct Indeed cannot be null, you can very easily obtain an uninitialized Indeed, for instance by getting the default value of the struct, which is always available. In that case, you would end up with an Indeed which wraps a null, which is unacceptable.

So I’m afraid it really is true. You can’t get rid of null once you’ve invited it in. It’s pretty annoying. I wish they hadn’t invited null in the first place.


Pragmatism is poison

Yesterday I gave a lightning talk called “Pragmatism is Poison” at the Booster Conference in Bergen. This blog post is essentially that talk in written form.

The basic idea of the talk, and therefore of this blog post, is to launch a public attack on perhaps the most fundamental virtue of the software craftsman: pragmatism. The reason is that I think pragmatism has turned toxic, to the point where it causes more harm than good.

While I do believe that pragmatism was once a useful maxim to combat analysis paralysis and over-engineering, I also believe that the usefulness has expired. Pragmatism no longer represents a healthy attitude, in my mind it’s not even a word anymore. It has degenerated into a thought-terminating cliché, which is used to stifle discussion, keep inquiries at bay and to justify not thinking things through.

These are, of course, pretty harsh words. I’ll try to make my case though.

Let’s start by having a look at what it means to be pragmatic. The definition Google gave me is as follows:

Dealing with things sensibly and realistically in a way that is based on practical rather than theoretical considerations.

[A small caveat: this isn’t necessarily the only or “correct” definition of what it means to be pragmatic. But I suppose it’s a sensible and realistic approximation?]

Anyways: this is good by definition! How can it possibly be bad?

Well, for a whole bunch of reasons really. We can divide the problems into two parts: 1) that pragmatism lends itself to abuse because it’s ambiguous and subjective, and 2) all the forms that abuse can take.

So the underlying problem is that the definition relies heavily on subjective judgment. (Consider the task of devising a test that would determine whether or not someone was being pragmatic – the very idea is absurd!) One thing is that what qualifies as “realistic” and “sensible” is clearly subjective. For any group of people, you will find varying degrees of agreement and disagreement depending on who they are and the experiences they’ve had. But the distinction between “practical” and “theoretical” considerations is problematic as well.

Here are some quick considerations to -uh- consider:

#1 laws of nature change
#2 bomb hits data center
#3 cloud providers go down
#4 servers are hacked
#5 unlikely timing of events

I suppose we can agree that #1 is of theoretical interest only. What about the others? If #2 happens, maybe we have more serious problems than your service going down, but it depends on how critical that service is. The same goes for #3. With respect to #4, a lot of people seem surprised when their servers are hacked – as if they hadn’t thought it possible! And #5 depends on whether or not you’d like luck to be a part of your architecture. I’ve been on projects where people would tell me that scenarios I asked about were so unlikely that they were practically impossible. Which to me just means they’ll be hard to debug and reproduce when they happen in production.

The point, though, is that the distinction between “practical” and “theoretical” is pretty much arbitrary. Where do we draw the line? Who’s to say? But it’s important, because mislabeling important considerations – things that affect the quality of software! – as “theoretical” leads to bad software.

So that’s the underlying problem of ambiguity. On to the various forms of abuse!

The first is that pragmatism is often used to present a false dilemma in software development. We love our dichotomies! I could use cheap rhetoric to imply that it traces back to the 0s and 1s of our computers, but let’s not – I have no evidence for such a claim! Luckily I don’t need it either. Suffice it to say that the world is complex, and it’s always very tempting to see things in black and white instead. And so we pit “pragmatism” on the one hand against “dogmatism” on the other, and it’s really important to stay on the right side of that divide! Sometimes we use different words instead, like “practical” vs “theoretical” or “real world” vs “ivory tower”. It all means the same thing: “good” vs “bad”. Which is a big lie, because we’re not making ethical judgments, we’re trying to assess the pros and cons of different solutions to particular problems in concrete contexts. This isn’t The Lord of the Rings.

The consequences of this polarizing are pretty severe. The false dilemma is often used as a self-defense bluff in discussions between team members. The so-called impostor syndrome is rampant in our industry, and so we reach for tools that help us deal with insecurity. One such tool is pragmatism, which can be abused as a magical spell to turn insecurity on its head.

Here’s how it works. Because of the false dilemma, a claim to be pragmatic is implicitly an accusation that says that whoever disagrees is dogmatic: they’re not being sensible, they’re not being realistic, they’re just obsessing over theoretical considerations. So while a statement like “I’m being pragmatic” sounds innocuous enough, it’s really not. It leads to stupid, unrefined, pointless discussions where no knowledge is gained. Instead we’re fighting over who’s good and who’s bad. Polarizing does not make discussions more interesting, it makes them degenerate into banality.

A related strategy is to use pragmatism as a diversion or a smoke bomb, offering the confronted party an easy exit and effectively ending the discussion. The reason is that it takes a lot of guts and perseverance to call someone’s bluff when they’re claiming to be pragmatic. You might approach your co-worker with a concern like “hey, I was looking at the code, and it seems like we’re blocking in our streaming API, which sort of defeats the purpose of a streaming API in the first place”. It sounds like a valid concern until your co-worker says the magic words “I was just being pragmatic” and vanishes in a puff of smoke, like so:

[Animation: the co-worker vanishing in a puff of smoke]

What we should do instead is accept the complexity we’re faced with and resist the urge to trivialize it. There’s always a need for thinking and discussion, and spurious claims of pragmatism don’t help.

Another problem with pragmatism is that it can be – and is – used as an excuse for sloppy thinking, or no thinking at all. Pragmatism encourages partial “solutions” that work not by reflecting a conceptual solution to a problem, but rather by mimicking correct behavior for various inputs. That way, we can short-circuit the need for design and collaboration. Instead we start with a trivial “happy path” solution and add flags and epicycles to flesh it out into richer behavior as needed. This approach yields software that more or less works for the inputs we’ve tried, and maybe for other inputs as well. (Behavior in the latter case is not as well understood, for obvious reasons.) If we come across inputs that cause problems, we apply patches in the form of additional flags and epicycles.

Because the approach sounds rather dubious when written out like that, we use the magic word “pragmatic” to make it sound better. It’s pragmatic problem-solving. We call the solutions themselves “good enough” solutions – wasting any more time thinking would be gold-plating!  Sometimes we use quotes like “perfect is the enemy of good” as further evidence that we’re doing a good thing – as if the problem we’re facing is too much perfect software in the world!

Here’s an obviously made-up example of this approach:

function square(x) {
    if (x == 1) return 1;
    if (x == 2) return 4;
    if (x == 3) return 9;
    if (x == 4) return 15;
    if (x == 5) return 25;
    // Should never happen.
    return -1;
}

This is a function that computes the square of the integers 1-5, not by reflecting any understanding of what it means to square a number, but rather by emulating correct behavior. It has a small bug for the input number 4, but that doesn’t matter much, we rarely get that value, and the result isn’t too far off. It’s a perfectly pragmatic solution if you can assume that x will always be in the range 1-5.

A more general solution would be this:

function square(x) {
    return x * x;
}

Which solves the general case of calculating the square of integers – sort of! Unfortunately, integers themselves are deeply pragmatic. (What happens when x * x is greater than the maximum value for integers?)

But these are all silly examples – theoretical considerations!

So let’s consider something more “real world”. Since software exists and executes in time, software typically needs to take time into account – by registering time stamps for various events and so forth.

How do you handle dates and times in your applications? Are you aware of the related complexities that exist? Do you handle those complexities explicitly? Do you think about how they might affect your application in various ways? Or do you simply close your eyes and hope for the best? Assume that all the systems you integrate with use time stamps from the same time zone? Assume that leap years and leap seconds won’t affect you (that all years have 365 days and all minutes have 60 seconds)? Assume that daylight savings time won’t cause any problems (even though it means that time isn’t linear – depending on your time zone(s), some points in time may not exist, whereas others may exist more than once)? Assume that everyone else around you is making the same assumptions? That’s mighty pragmatic!

Finally, pragmatism is sometimes used to create outright logical contradictions. Pragmatism is about compromise, but some compromises cannot be made without compromising the concept itself! For instance, some architectural properties have principles that cannot be violated while still maintaining the properties – they are simply no longer present due to the violation. (A vegetarian cannot eat meat in the weekends and still be a vegetarian, if you will.) Not even pragmatism can fix this, but that doesn’t stop people from trying!

To illustrate, here’s (a reproduction of) a funny meme I found on the Internet.

[Image: Batman slap meme about “pragmatic REST”]

I think it’s funny because a lot of people seem to get annoyed when someone points out that their self-proclaimed RESTful APIs aren’t really RESTful because they violate some property of REST or other – typically the hypermedia constraint. People get annoyed because they don’t want to think about that, they’d rather be pragmatic and get on with stuff.

But for some reason they still want to keep that word, REST. Maybe they think it sounds good, or maybe they promised their manager a REST API, I don’t know. It doesn’t matter. They want to keep that word. And so they say “well, maybe it’s not your Ivory Tower REST” (implicitly bad!), “maybe it’s Pragmatic REST instead” (implicitly good!). And then they go on to do something like JSON over HTTP, which is really simple and great, and they can easily deserialize the JSON in their JavaScript, and they’ve practically shipped it already. And when someone comes along and talks about hypermedia being a requirement for REST, they just slap them! Pretty funny!

Here’s another meme. I made this one myself. Unfortunately it’s not funny.

[Image: Batman slap meme about “pragmatic security”]

Why isn’t it funny? It looks a lot like the previous one.

The problem is that when someone violates some established principle of security – maybe they decide it’s convenient to store encrypted passwords on their server or to roll their own cryptography – we think it’s a bad idea. And we don’t think it’s a good excuse to say “well, maybe it’s not Ivory Tower Security, maybe it’s Pragmatic Security instead”. We simply don’t agree that it’s very secure at all. So in some sense it’s not funny because the roles have been reversed. Turns out it’s much more funny being Batman than being Robin in this meme. Go figure.

Now we have the strange situation that it’s apparently OK for some words in software to have no meaning (REST), whereas we insist that others do have meaning (secure). For the meaningless words, prefixing “pragmatic” will absolve you from your sins, for the meaningful words, it will not. This is a problem, I think. Who’s to decide which words should have meaning?

Here’s a third meme. I made this one too.

[Image: Batman slap meme about “pragmatic tea”]

It’s a bit strange, I’ll admit that. But bear with me. It’s the last one, I promise.

What would you say if someone offered you hot water and a biscuit and said “have some pragmatic tea”? Would you be content? Would you pay for a cup of pragmatic tea? Or would you take the route of the dogmatist and argue that it’s not really tea if no tea was involved in the preparation? Well, SLAP you! Crazy tea zealot! Hang out with the other ivory tower hipsters, and have your fancy tea! Who drinks tea anyway?!

At this point you can see I’ve gone absurd – but we’re still just doing variations of the same joke. I didn’t bring the absurdity, it was there all along. The point I’m trying to make is that words do have meaning, whether it’s REST or security or tea. We should respect that meaning instead of using pragmatism as a poor excuse for undermining it. Some properties have principles that cannot be broken while at the same time keeping the properties intact. You can’t have REST without Representational State Transfer, because that’s literally what the acronym means. Secure applications shouldn’t be storing passwords, even if they’re encrypted. Tea should contain tea. (Please don’t get me started on Rooibos.)

I should add that it’s perfectly fine to not care about REST, or to dispute the value of REST – or tea, or security, for that matter. Those are conversations worth having. It’s a free world, and everyone is entitled to choose the properties they care about, based on the context they’re in. Lots of people very vocally don’t care about REST, perhaps even more people than people who know what REST actually is! I have no problem with that. What’s less fine is pretending it has no meaning.

This concludes my attack on pragmatism, at least for now! To reiterate: pragmatism is easily abused because it’s hard to tell if someone is genuinely pragmatic or just claiming to be so. The abuse takes various forms: false dilemma, self-defense bluff, smoke bomb, justification for sloppy thinking and undermining of the meaning of words.

A call for action? Please stop using pragmatism as an excuse for doing sloppy work! And if you find you do need to use the word, please have it be the beginning of a discussion rather than the end of one.


Technical debt isn’t technical

TL;DR

Technical debt is not primarily caused by clumsy programming, it is a third-order effect of poor communication. Technical debt is a symptom of an underlying lack of appropriate abstractions, which in turn stems from insufficient modelling of the problem domain. This means that necessary communication has not taken place: discussions and decisions to resolve ambiguity and make informed trade-offs have been swept under the rug. Technical debt is the reification of this lack of resolution in code.

The technical debt meme

For a while now, I’ve been wanting to write about technical debt. As we all know, technical debt is a very successful meme in software development – it needs no introduction as a concept. Like any good virus, it has self-replicated and spread throughout the software development world, even reaching into the minds of project leaders and stakeholders. This is good, since the notion of technical debt brings attention to the fact that the internal quality of software matters – that there are aspects of software that are invisible to anyone but the programmers, but still have very visible effects – in the form of prolonged quality problems, missed deadlines, development grinding to a halt and so forth. For this reason, we should tip our hats to Ward Cunningham for coming up with the term. It gives us terminology that allows us to communicate better with non-technical stakeholders in software projects.

Why technical debt is a misnomer

That’s not what I want to write about however. What I want to say is that technical debt is also a deeply problematic notion, because it speaks little of the causes of technical debt, or how to fix them.

The usual story is that technical debt stems from project deadlines. If the code is inadequate, sloppy or otherwise “bad”, it has probably been written in a hurry because the project leader said so. This indicates that time is the cause of our problems, and also conveniently places the responsibility of the mess on someone else than the developers.

This is certainly true in some cases; we have all written code like that, and for those exact reasons. I just don’t think it’s the whole story, or even a major part of it. It seems to me entirely inadequate to explain the majority of technical debt that I’ve seen on software projects. The so-called technical problems go much deeper than mere sloppiness of implementation, and reveal fundamental problems in the process of understanding the business domain and how that understanding is captured and represented in software. In particular, it is very common to see weak abstractions that fail to represent the richness of the domain. The code tends to be overgrown with conditionals and flags, which indicates a weak model that has handled evolution and change very poorly – by ad hoc sprouting of extra branches and the booleans needed to navigate them as appropriate for different use cases. Complexity grows like ever new epicycles on the inadequate model – easily recognizable as things in your code that cannot be given meaningful names because they have no meaningful counterpart in the problem domain. The end result is a horrific steampunk contraption of accidental complexity.

This makes the code extraordinarily difficult to reason about. Hence, it would seem that the so-called technical debt really stems from modelling debt; the code lacks the higher-level concepts of a rich domain model that would make it possible to express the use cases more directly.

The currency of technical debt is knowledge. — @sarahmei

In DDD terms, modelling debt indicates that insufficient knowledge crunching has taken place. Knowledge crunching involves learning about the problem domain and capturing that knowledge in a suitable domain model. This is a communication-driven process that involves identifying and resolving ambiguity in the problem domain, and expressing the domain as clearly as possible. Most of all, it is a chaotic and messy process that involves people and discussion. Insufficient knowledge crunching in turn points towards the ultimate cause of technical debt: poor communication.

Communication is the principal portion of the “technical debt.” Messy code is just the ever-increasing interest. — @nycplayer

Why technical debt is misrepresented

So if technical debt isn’t really technical – or at least not ultimately caused by technical issues – why do we keep referring to it as technical debt? Unfortunately, it seems to me that developers have a tendency to look for technical solutions to soft problems.

Technical tasks are alluring because, unlike modelling and communication, they have no psychological dimensions, and tend not to lead to conflict. I don’t want to add to the stereotype of the programmer as particularly socially inept; suffice it to say that most people will prefer to avoid conflict if possible. Technical work is a series of puzzles to be solved. Modelling work uncovers human issues, differences of opinion, different focus, different hopes for the application, even personal conflicts. Figuring out what the application should do exposes all of these issues, and it is painful! This is why everyone is quoting Conway’s Law these days.

And so, even as widespread as the meme technical debt is, it seems to be poorly understood, even by developers – perhaps even particularly by developers! Indeed, the Wikipedia page for technical debt – no doubt authored by developers – currently lists 11 causes of technical debt, but lack of understanding of the problem domain is not one of them! “Lack of knowledge” sounds promising, until you read the explanation: “when the developer simply doesn’t know how to write elegant code”(!)

Again we see the focus on the technical aspects, as if technical debt were caused by clumsy, unskilled programmers with nagging, incompetent project leaders – and hence as if it were fixable by some virtuous programmer – a master craftsman, no less! – using generic, context-free principles like SOLID, dependency injection and patterns. It is not! Code hygiene is certainly a virtue, but it is no substitute for modelling, just like frantically washing your hands is not sufficient for successful surgery. Getting lost in code hygiene discussions is like arguing about the optimal kinds of soap and water temperature while the patient is dying on the operating table.

And yet it is indirectly true: a developer who doesn’t know the importance of understanding the problem domain, of proper modelling, will certainly fail to write elegant code. Elegance of the implementation can only stem from an elegant model that reflects a deep understanding of the problem addressed.

Down payment

This has deep ramifications, in particular in how we address technical debt. Refactoring is another successful meme in software development, and we often use it to describe the process of making down payments on technical debt. But if technical debt isn’t just clumsy code, if instead it is clumsy code caused by unresolved ambiguity in the problem domain, then it is poorly addressed by rearranging code. We need to start in the other end, with a better understanding of the problem we are trying to solve, and with modelling concepts permeating the code instead of branches and booleans. This is what Eric Evans calls “refactoring towards deeper insight”. Unless we have a model to drive our efforts, there is no reason to believe that we will be able to do much better than before. Refactoring without an improved domain model is just hubris.

A rewrite will end up with the same problems as the original unless you close the understanding gap. — @sarahmei

To conclude

That’s what I wanted to say about technical debt. It’s not very technical at all. It’s about code that gets bad because humans fail to communicate when trying to solve problems in some business domain using software. It usually is.

 


Diamond mirrors

My friend Bjørn Einar did a nice write-up about the Diamond code kata in F# the other day. He did so in the context of TDD-style evolutionary design vs up-front thinking away from the keyboard. Apparently he has this crazy idea that it might be worthwhile to do a bit of conceptual problem-solving and thinking about properties of the domain before you start typing. Very out of vogue, I know.

Anyways, he ended up with an interesting implementation centered on exploiting something called the taxicab norm. (I hadn’t heard of it either, which makes it all the more interesting.) I really like that approach: cast your problem as an instance of an existing, well-understood problem for which there exists a well-understood solution. It replaces ad-hoc code with a mathematical idea, and is a far cry from typical implementations that get heavy on string manipulations and where the solution to the problem in general is swamped with things related to outputting the diamond to the console.

I wondered if I could come up with an alternative approach, and hence I got to thinking a bit myself. Away from the keyboard, like a madman. The solution I came up with is perhaps a bit more conventional, a bit less mathematical (I’m sorry to say), but still centered on a single idea. That idea is mirroring.

To illustrate the approach, consider a sample diamond built from five letters, A through E. It should look like the following:

....A....
...B.B...
..C...C..
.D.....D.
E.......E
.D.....D.
..C...C..
...B.B...
....A....

The mirroring is fairly obvious. One way to look at the diamond is to consider it as a pyramid mirrored along the E-row. But at the same time, it is also a pyramid mirrored along the A-column. So it goes both ways. This means that we could rather easily build our diamond from just a quarter of it, by mirroring that quarter twice. We would start with just this:

A....
.B...
..C..
...D.
....E

We could then proceed by mirroring along the A-column to produce this:

....A....
...B.B...
..C...C..
.D.....D.
E.......E

And then we could complete the diamond by mirroring along the E-row, and it would look like the diamond we wanted.

So far so good. But we need the first quarter. How could we go about producing that?

Assume we start with a list ['A' .. 'E']. We would like to use that to produce this list:
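
Presumably the quarter diamond expressed as a list of character lists, like so:

[['A'; '.'; '.'; '.'; '.'];
 ['.'; 'B'; '.'; '.'; '.'];
 ['.'; '.'; 'C'; '.'; '.'];
 ['.'; '.'; '.'; 'D'; '.'];
 ['.'; '.'; '.'; '.'; 'E']]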

But that’s rather easy. Each inner list is just the original list ['A' .. 'E'] with all letters except one replaced by ‘.’. That’s a job for map. Say I want to keep only the ‘B’:
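
A one-liner along these lines does the trick:

['A' .. 'E'] |> List.map (fun c -> if c = 'B' then c else '.')
// ['.'; 'B'; '.'; '.'; '.']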

And so on and so forth for each letter in the original list. We can use a list comprehension to generate all of them for us. For convenience, we’ll create a function genLists:
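
One possible definition, using a list comprehension over the letters (the body is my reconstruction):

let genLists lst =
  [ for e in lst -> lst |> List.map (fun x -> if x = e then x else '.') ]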

This gives us the first quarter. Now for the mirroring. That’s easy too:
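
Here is one way to write it, with the head element as the pivot:

let mirror lst =
  match lst with
  | [] -> []
  | x :: xs -> List.rev xs @ x :: xs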

(We’ll never actually call mirror with an empty list, but I think it’s better form to include it anyway.)

So now we can map the mirror function over the quarter diamond to produce a half diamond:
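
Something like this, reusing genLists from above:

let halfDiamond = ['A' .. 'E'] |> genLists |> List.map mirror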

Excellent. Now we’re almost ready to do the second mirroring. The only problem is that the mirror function uses the head element as the pivot for mirroring, so we would end up with an X instead of a diamond!

That’s trivial to fix though. We’ll just reverse the list first, and then do the mirroring. I’m not even going to write up the result for that – it is obviously the completed diamond. Instead, here’s the complete diamond function, built from the parts we’ve seen so far:
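
Putting the pieces together, it might look like this:

let diamond letters =
  letters
  |> genLists
  |> List.map mirror
  |> List.rev
  |> mirror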

Could I speed up things by reversing my lists before the first mapping instead of after? No, because the (outer) list has the same number of elements before and after the first mirroring. Plus it’s easier to explain this way. And really, perf optimization for a code kata? Come on!

Now for rendering:
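
Something along these lines will turn the nested lists into printable text:

let render rows =
  rows
  |> List.map (fun row -> row |> List.map string |> String.concat "")
  |> String.concat "\n"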

And to run everything (for a full-sized diamond, because why not):
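
Presumably just:

['A' .. 'Z'] |> diamond |> render |> printfn "%s"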

And that’s all there is to it. The entire code looks like this:
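
Collected in one place, the reconstruction above adds up to:

let genLists lst =
  [ for e in lst -> lst |> List.map (fun x -> if x = e then x else '.') ]

let mirror lst =
  match lst with
  | [] -> []
  | x :: xs -> List.rev xs @ x :: xs

let diamond letters =
  letters
  |> genLists
  |> List.map mirror
  |> List.rev
  |> mirror

let render rows =
  rows
  |> List.map (fun row -> row |> List.map string |> String.concat "")
  |> String.concat "\n"

['A' .. 'Z'] |> diamond |> render |> printfn "%s"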


Aspects without Aspects

In the previous blog posts, we saw that we could hide the problematic concrete exception type from the C# compiler by tucking it inside a transformation from a closure of type Func<TR> to another closure of the same type. But of course we can use such transformations for many things besides exception handling. Any old behaviour that we would like to throw on top of the business logic, we can apply in layers using this approach.

This capability is so cool that I took a break from writing this blog post to share my enthusiasm with my wife. She was like, “what are you blogging about?”, and I was like “there’s this really cool thing you can do, where you apply this transformation to some method call, and then you can like, do additional stuff with it, entirely transparently!”, and she was like “like what?”, and I was like “like anything!”, and she was like “like what?”, and I was like “anything you want!”, but she was still like “like what though?” and then I turned more like “uh… uh… like say you had this method that returned a string – you could easily transform that into a method that looked exactly the same, but returned the reversed string instead”, and she was like “…the reversed string? why?” and I was like “or-or-or maybe you could return the uppercase string instead…?”, and she was like “uppercase?” with totally uppercase eyebrows and I was like “nonono! I got it! say you had this method that did a really expensive and slow computation, you could turn that into a method that would keep along the result that computation, so you didn’t have to do the actual computation all the time”, and she was like “oh, that’s cool” and I was like “phew! I’m never talking to her about blogging again!”.

So that was a close call. But yes, you can totally use this for caching. All we need is a suitable transformation thing.
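
A sketch of such a transformation: an extension method from Func<TR> to Func<TR> that remembers the first result (cached and t are the variables referred to below; the method name Cached is my own).

// inside a static class, with using System;
public static Func<TR> Cached<TR>(this Func<TR> f)
{
    bool cached = false;
    var t = default(TR);
    return () =>
    {
        if (cached)
        {
            return t;
        }
        t = f();
        cached = true;
        return t;
    };
}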

Here, we’re taking advantage of the fact that C# has mutable closures – that is, that we can write to cached and t from inside the body of the lambda expression.

To verify that it works, we need a suitable example – something that’s really expensive and slow to compute. And as we all know, one of the most computationally intensive things we can do in a code example is to sleep:
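
Something like this, with a closure q that sleeps before producing its string (the exact strings are made up; the shape follows from the discussion below):

Func<string> q = () =>
{
    Thread.Sleep(1000);                 // System.Threading
    return "Hard-obtained string";
};

// Three calls to the original closure: all slow.
Console.WriteLine(q());
Console.WriteLine(q());
Console.WriteLine(q());

var cachedQ = q.Cached();

// Three calls to the caching closure: slow the first time, quick after that.
Console.WriteLine(cachedQ());
Console.WriteLine(cachedQ());
Console.WriteLine(cachedQ());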

Well, what kind of behaviour should we expect from this code? Obviously, the first three q calls will be slow. But what about the last three? The last three execute the caching closure instead. When we execute the fourth call, cached is false, and so the if test fails, and we proceed to evaluate the original, non-caching q (which is slow), tuck away the result value for later, set the cached flag to true, and return the computed result – the hard-obtained string. But the fifth and sixth calls should be quick, since cached is now true, and we have a cached result value to return to the caller, without ever having to resort to the original q.

That’s theory. Here’s practice:

So that seems to work according to plan. What else can we do? We’ve seen exception handling in the previous posts and caching in this one – both examples of so-called “cross-cutting concerns” in our applications. Cross-cutting concerns was hot terminology ten years ago, when the enterprise world discovered the power of the meta-object protocol (without realizing it, of course). It did so in the guise of aspect-oriented programming, which carried with it a whole vocabulary besides the term “cross-cutting concerns” itself, including “advice” (additional behaviour to handle), “join points” (places in your code where the additional behaviour may be applied) and “pointcuts” (a way of specifying declaratively which join points the advice applies to). And indeed, we can use these transformations that we’ve been doing to implement a sort of poor man’s aspects.

Why a poor man’s aspects? What’s cheap about them? Well, we will be applying advice at various join points, but we won’t be supporting pointcuts to select them. Rather, we will be applying advice to the join points manually. Arguably, therefore, it’s not really aspects at all, and yet we get some of the same capabilities. That’s why we’ll call them aspects without aspects. Makes sense?

Let’s consider wrapping some closure f in a hypothetical try-finally-block, and see where we might want to add behaviour.
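
Schematically, something like this, with the three interesting places marked:

public static Func<TR> Wrapped<TR>(this Func<TR> f)
{
    return () =>
    {
        try
        {
            // 1: before f is called
            var result = f();
            // 2: after f has completed successfully
            return result;
        }
        finally
        {
            // 3: after f has completed, successfully or not
        }
    };
}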

So we’ll create extension methods to add behaviour in those three places. We’ll call them Before, Success and After, respectively.
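
A sketch of what those extension methods might look like, with both an Action and an Action<TR> overload for the two join points that follow the call to f:

public static Func<TR> Before<TR>(this Func<TR> f, Action a)
{
    return () => { a(); return f(); };
}

public static Func<TR> Success<TR>(this Func<TR> f, Action a)
{
    return () => { var r = f(); a(); return r; };
}

public static Func<TR> Success<TR>(this Func<TR> f, Action<TR> a)
{
    return () => { var r = f(); a(r); return r; };
}

public static Func<TR> After<TR>(this Func<TR> f, Action a)
{
    return () => { try { return f(); } finally { a(); } };
}

public static Func<TR> After<TR>(this Func<TR> f, Action<TR> a)
{
    return () =>
    {
        var r = default(TR);
        try { r = f(); return r; }
        finally { a(r); }   // r is still default(TR) if f threw
    };
}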

Note that we have two options for each of the join points that occur after the call to the original f closure. In some cases you might be interested in the value returned by f, in others you might not be.

How does it work in practice? Let’s look at a contrived example.
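
A reconstruction consistent with the walkthrough below (the helper name Tweaked and the exact messages are guesses, guided by the output described):

static Func<string> Tweaked(string name, Func<string> fn)
{
    return fn
        .Before(() => Console.WriteLine("Calling the original closure..."))
        .Success(r => Console.WriteLine("Successfully obtained: " + r))
        .Before(() => Console.WriteLine("Executing " + name + "..."))
        .After(() => Console.WriteLine("Done with " + name + "."));
}

var m1 = Tweaked("m1", () => "Hello Kiczales!");
var m2 = Tweaked("m2", () => { throw new Exception("Boom!"); });

Console.WriteLine("What did I get? " + m1());
try { m2(); } catch (Exception) { /* Success never runs for m2; After still does */ }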

So here we have a transformation thing that takes a Func<string> closure and returns another Func<string> closure, with several pieces of advice applied. Can you work out when the different closures will be executed?

We start with some closure fn, but before fn executes, the first Before must execute (that’s why we call it Before!). Assuming both of these execute successfully (without throwing an exception), the Success will execute. But before all these things, the second Before must execute! And finally, regardless of how the execution turns out with respect to exceptions, the After should execute.

In the case of m1, no exception occurs, so we should see the message “Successfully obtained: Hello Kiczales!” in between “Executing m1…” and “What did I get? Hello Kiczales!”. In the case of m2, on the other hand, we do get an exception, so the Success closure is never executed.

A screenshot of my console verifies this:

[Screenshot: console output confirming the caching behaviour]

So we’ve seen that we can do fluent exception handling, caching and aspects without aspects using the same basic idea: we take something of type Func<TR> and produce something else of the same type. Of course, this means that we’re free to mix and match all of these things if we wanted to, and compose them all using Linq’s Aggregate method! For once, though, I think I’ll leave that as an exercise for the reader.

And of course, we can transform other things besides closures as well – we can use the same approach to transform any instance of type T to some other T instance. In fact, let’s declare a delegate to capture such a generalized concept:
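
Given the name discussed below, presumably something like:

public delegate T Decorate<T>(T t);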

Why Decorate? Well, nothing is ever new on this blog. I’m just rediscovering old ideas and reinventing flat tires as Alan Kay put it. In this case, it turns out that all we’ve been doing is looking at the good old Decorator pattern from the GoF book in a new or unfamiliar guise.


Rethrow recoil

The closure-based exception handling scheme in the previous blog post works almost perfectly, and it would have worked entirely perfectly if a catch block were an ordinary block of code. But alas, it is not. Not quite. A catch block is special in that there is one statement that you can make inside a catch block that you cannot make anywhere else in your program. Unfortunately, it is also a fairly common statement to use inside catch blocks: the rethrow statement – that is, a throw statement with no operand. We cannot simply use a throw statement with an operand instead, since that has different semantics. In fact, it’s not a rethrow at all, it’s a brand new throw (even when we’re using the exception we just caught). The consequence is that the original stack trace is lost. In other words, we lose trace of where the exception originally occurred, which is almost never what we want.

So that’s unfortunate. Unacceptable even. Can we fix it? Of course we can fix it! We’re programmers, we can fix anything!

There’s no way to put a rethrow into our lambda expression though, so we need to do something else. That something else turns out to be trivial. We do have a genuine catch-block available, so we’ll just put it there. Or rather, we’ll create a new try-catch block with a hard-coded rethrow inside, and put that block inside a new extension method which complements the one we created last time. Like so:
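
A sketch of that extension method (the signature follows from the discussion below):

public static Func<TR> Touch<TR, TE>(this Func<TR> f, Action<TE> h)
    where TE : Exception
{
    return () =>
    {
        try
        {
            return f();
        }
        catch (TE ex)
        {
            h(ex);
            throw;   // a genuine rethrow: the original stack trace is preserved
        }
    };
}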

In this case, we use an Action<TE> instead of a Func<TE, TR>, because obviously the exception handler won’t be returning anything – we just hard-coded a rethrow in there!

Why Touch and not an overload of the Catch method we saw before? Well, first of all we’re not catching the exception, we’re merely touching it on the way through – hence Catch is not really a suitable name for what we’re doing. And besides, the following code snippet would be ambiguous, at least to the reader of the code. Assuming we had overloaded the Catch method, how should we interpret something like this?

Is that an Action<TE> or a Func<TE, TR>? I have no idea. It turns out that the compiler is OK with the ambivalence – it decides it must be an Action<TE> (which leaves us with a rather strange catch block where the throw ex statement in the handler terminates the block before the rethrow occurs). But I’m not! Better to be explicit. Touch it is.

Now we can write code like this:
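
A reconstruction consistent with the description that follows (the exception types and strings are taken from that description; FetchStringSomewhere stands in for something that might throw):

Func<string> fn = () => FetchStringSomewhere();

var safe = fn
    .Touch((NullReferenceException ex) => Console.WriteLine("I saw you"))
    .Catch((ArgumentException ex) => "Something")
    .Catch((NullReferenceException ex) => "ex");

Console.WriteLine(safe());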

So what happens in the last line? Well, if an ArgumentException is thrown, we’ll see the string “Something” written. If a NullReferenceException is thrown, however, we’ll see the “I saw you” string, in addition to “ex”, since the exception percolates up to the outer handler where it is caught and swallowed.

Fixed!


Linq to Exceptions

Lately I’ve been thinking about the mundane task of exception handling. Remarkably unsexy, and yet completely mandatory. There’s no escape from handling the exceptions that may (and therefore will) occur in your application, unless you want it to crash or misbehave regularly.

Proper exception handling can be a chore anywhere in your application, but in particular at integration points. Whenever your application needs to communicate with some external service, any number of things can go wrong, at various levels in the stack – from application level errors to network problems. We need to be able to cope with all these different kinds of errors, and hence we surround each service call with a veritable flood of exception handlers. That sort of works, but there are problems – first, the actual service is drowning amidst all the exception handling, and second, we get a lot of code duplication, since we tend to handle many kinds of exceptions in the same way for all service calls.

I’ve been discussing how to best solve this problem in a completely general and flexible way in several impromptu popsicle- and coffee-driven sessions with some of my compatriots at work (in particular Johan and Jonas – thanks guys!). Some of the code examples in this blog post, especially those who exhibit some trace of intelligence, can legitimately be considered rip-offs, evolutions, misunderstandings or various other degenerations of similar code snippets they have come up with.

But I’m getting ahead of myself. Let’s return to the root of the problem: how exception handling code has a tendency to overshadow the so-called business logic in our applications and to cause severe cases of duplication.

The problem is minor if there is a single exception we’re catching:
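
Something like this (SomeException and FetchStringSomewhere are stand-ins for whatever your integration point deals with):

public string GetString()
{
    try
    {
        return FetchStringSomewhere();
    }
    catch (SomeException)
    {
        // handle it somehow
        return null;
    }
}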

However, things regress quickly with additional types of exceptions:
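
Sketched out, it might look like this:

public string GetString()
{
    try
    {
        return FetchStringSomewhere();
    }
    catch (SomeException)
    {
        // handle it somehow
        return null;
    }
    catch (SomeOtherException)
    {
        // handle it somehow
        return null;
    }
    catch (YetAnotherException)
    {
        // handle it somehow
        return null;
    }
}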

This is a bit messy, and the signal-to-noise-ratio is low. It gets much worse when you have a second piece of interesting code that you need to guard with the same exception handling. Suddenly you have rampant code repetition on your hands.

A solution would be to use a closure to inject the interesting code into a generic exception handling method, like this:
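
A sketch of such a method:

public static TR Call<TR>(Func<TR> f)
{
    try
    {
        return f();
    }
    catch (SomeException)
    {
        // handle it somehow
        return default(TR);
    }
    catch (SomeOtherException)
    {
        // handle it somehow
        return default(TR);
    }
    catch (YetAnotherException)
    {
        // handle it somehow
        return default(TR);
    }
}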

And then you can use this in your methods:
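
For instance, something like:

public string SafeMethod1()
{
    return Call(() => FetchStringSomewhere());
}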

Adding another method is trivial:
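
Say (with ComputeSomethingRisky as another made-up stand-in):

public int SafeMethod2()
{
    return Call(() => ComputeSomethingRisky());
}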

This works pretty nicely and solves our immediate issue. But there are a couple of limitations. In particular, three problems spring to mind:

  1. What if you want to return some legal, non-default value of type TR from one or more of the catch-blocks? As it stands now, each catch-block must either rethrow or return the default value of TR.
  2. What if there are variations in how you want to handle some of the exceptions? For instance, it may be that YetAnotherException should be handled differently for each method.
  3. What if there are slight variations between the “catching” needs for the various methods? What if you decided that SafeMethod2 doesn’t need to handle SomeOtherException, whereas SafeMethod3 should handle IdiosyncraticException in addition to the “standard” ones?
    As an answer to the first two problems, you could pass in each exception handler to the Call method! Then you would have a method like this:
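
Perhaps along these lines:

public static TR Call<TR>(
    Func<TR> f,
    Func<SomeException, TR> h1,
    Func<SomeOtherException, TR> h2,
    Func<YetAnotherException, TR> h3)
{
    try
    {
        return f();
    }
    catch (SomeException ex)
    {
        return h1(ex);
    }
    catch (SomeOtherException ex)
    {
        return h2(ex);
    }
    catch (YetAnotherException ex)
    {
        return h3(ex);
    }
}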

    And at this point you’re about to stop reading this blog post, because WTF. Now your methods look like this:
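
Roughly like this:

public string SafeMethod1()
{
    return Call(
        () => FetchStringSomewhere(),
        (SomeException ex) => { Console.WriteLine(ex.Message); return null; },
        (SomeOtherException ex) => { Console.WriteLine(ex.Message); return null; },
        (YetAnotherException ex) => { Console.WriteLine(ex.Message); return null; });
}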

    So we’re pretty much back at square one, except it’s a bit more convoluted and confusing for the casual reader. And we still don’t have a good solution for the third problem. We could of course fake “non-handling” of SomeOtherException for SafeMethod2 by simply handing in a non-handling handler, that is, one that simply rethrows directly: ex => { throw ex; }. But that’s ugly and what about the IdiosyncraticException? Well, it’s not going to be pretty:

    Which just might be the worst code ever, and also has the idiosyncrasy that the additional catch-handler will only be reached if the Exception handler rethrows. Horrible. Better, perhaps, to put it inside?

    Well yes, slightly better, but still pretty horrible, and much worse than just suffering the duplication in the first place. But maybe we’ve learned something. What we need is composition – our current solution doesn’t compose at all. We need to be able to put together exactly the exception handlers we need for each method, while at the same time avoiding repetition. The problem is in some sense the coupling between the exception handlers. What if we tried a different approach, handling a single exception at a time?

    We could have a less ambitious Call-method, that would handle just a single type of exception for a method. Like this:
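
A sketch:

public static TR Call<TR, TE>(Func<TR> f, Func<TE, TR> h)
    where TE : Exception
{
    try
    {
        return f();
    }
    catch (TE ex)
    {
        return h(ex);
    }
}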

    Now we have a single generic exception handler h. Note that when we constrain the type variable TE to be a subclass of Exception, we can use TE in the catch clause, to select precisely the exceptions we would like to catch. Then we could write a method like this:
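
For instance:

public string GetString1()
{
    return Call(
        () => FetchStringSomewhere(),
        (SomeException ex) => ex.Message);
}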

    What if we wanted to catch another exception as well? The solution is obvious:
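
One possible reading is to wrap the previous method in another Call:

public string GetString2()
{
    return Call(
        () => GetString1(),
        (SomeOtherException ex) => ex.Message);
}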

    And yet another exception?

    You get the picture. Of course, we can collapse all three to a single method if we want to:
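
That is, nest the Calls directly (a sketch):

public string SafeGetString()
{
    return Call(
        () => Call(
            () => Call(
                () => FetchStringSomewhere(),
                (SomeException ex) => ex.Message),
            (SomeOtherException ex) => ex.Message),
        (YetAnotherException ex) => ex.Message);
}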

    What have we gained? Well, not readability, I’ll admit that. But we’ve gained flexibility! Flexibility goes a long way! And we’ll work on the readability shortly. First, though: just in case it’s not clear, what we’ve done is that we’ve created an exception handling scenario that is similar to this:
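
That is, something with the shape of nested try-catch blocks:

public string SafeGetString()
{
    try
    {
        try
        {
            try
            {
                return FetchStringSomewhere();
            }
            catch (SomeException ex)
            {
                return ex.Message;
            }
        }
        catch (SomeOtherException ex)
        {
            return ex.Message;
        }
    }
    catch (YetAnotherException ex)
    {
        return ex.Message;
    }
}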

    So there’s nothing very complicated going on here. In fact, I bet you can see how similar the two methods really are – the structure is identical! All we’ve done is replace the familiar try-catch construct with our own Call-construct.

    As an aside, we should note that the composed try-catch approach has slightly different semantics than the sequential, coupled try-catch approach. The difference in semantics is due to decoupling provided by the composed try-catch approach – each catch-block is completely independent. Therefore, there is nothing stopping us from having multiple catch-handlers for the same type of exception should we so desire.

    Now, to work on the readability a bit. What we really would like is some way to attach catch-handlers for various exception types to our function call. So assuming that we wrap up our original function call in a closure using a delegate of type Func<TR>, we would like to be able to attach a catch-handler for some exception type TE, and end up with a new closure that still has the type Func<TR>. Then we would have encapsulated the exception handling completely. Our unambitious Call-method from above is almost what we need, but not quite. Instead, let’s define an extension method on the type that we would like to extend! Func<TR>, that is:
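
A sketch of that extension method:

public static Func<TR> Catch<TR, TE>(this Func<TR> f, Func<TE, TR> h)
    where TE : Exception
{
    return () =>
    {
        try
        {
            return f();
        }
        catch (TE ex)
        {
            return h(ex);
        }
    };
}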

    So the trick is to return a new closure that encapsulates calling the original closure and the exception handling. Then we can write code like this:
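
For instance:

public string SafeGetString()
{
    Func<string> fn = () => FetchStringSomewhere();
    var safe = fn
        .Catch((SomeException ex) => ex.Message)
        .Catch((SomeOtherException ex) => ex.Message)
        .Catch((YetAnotherException ex) => ex.Message);
    return safe();
}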

    Now the neat thing is that you can very easily separate out the catch-handler-attachment from the rest of the code:
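
Along these lines:

private static Func<string> WithStandardHandlers(Func<string> fn)
{
    return fn
        .Catch((SomeException ex) => ex.Message)
        .Catch((SomeOtherException ex) => ex.Message)
        .Catch((YetAnotherException ex) => ex.Message);
}

public string SafeGetString()
{
    var safe = WithStandardHandlers(() => FetchStringSomewhere());
    return safe();
}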

    So we have essentially created a fluent interface for attaching catch-handlers to a method call. The cool thing is that it is trivial to attach additional exception handlers as needed – and since we do so programmatically, we can even have logic to control the attachment of handlers. Say we discovered that we needed to catch WerewolfExceptions when the moon is full? No problem:
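
Assuming some way of checking the moon phase (Moon.IsFull is made up here), the attachment can simply be conditional:

Func<string> fn = () => FetchStringSomewhere();

var safe = fn
    .Catch((SomeException ex) => ex.Message)
    .Catch((SomeOtherException ex) => ex.Message);

if (Moon.IsFull)
{
    safe = safe.Catch((WerewolfException ex) => ex.Message);
}

Console.WriteLine(safe());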

    In my eyes, this is pretty cool. You might be running away screaming, thinking I’m crazy and that with this approach, you’ll never know which exceptions you’re actually catching anymore. You could be right. Opinions differ.

    But that’s OK. All I’m doing is providing an alternative approach to the handling of multiple exceptions – one that I think offers increased power and flexibility. I’m not saying you should take advantage of it. With greater power comes greater responsibility and all that.

    And besides, we still haven’t talked about Linq. An alternative (and attractive) solution to our current fluent interface is to attach a sequence of catch-handlers at once. Something like this:
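
That is, something we would like to be able to write, but (as explained next) cannot actually type:

var safe = fn.CatchAll(
    (NullReferenceException ex) => ex.Message,
    (InvalidOperationException ex) => ex.Message,
    (FormatException ex) => ex.Message);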

    However, it’s surprisingly difficult to provide a suitable type for that sequence of catch-handlers – in fact, the C# compiler fails to do so! The problem is that delegates are contravariant in their parameters, which means that a delegate D1 is considered a subtype of delegate D2 if the parameters of D1 are supertypes of the parameters of D2. That’s all a bit abstract, so perhaps an example will help:
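
A small sketch of the contravariance at work (the last line is the one the compiler rejects):

Action<object> d1 = o => Console.WriteLine(o);
Action<string> d2 = s => Console.WriteLine(s.ToUpper());

d2 = d1;   // fine: an Action<object> can stand in for an Action<string>
d1 = d2;   // compile error: an Action<string> cannot stand in for an Action<object>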

    To make sense of the abstract description above, assume that D1 is Action<object> and D2 is Action<string>. Since the D1 parameter (object) is a supertype of the D2 parameter (string), it follows that D1 is a subtype of D2 – and not the other way around, as we might have guessed. This is why the C# compiler won’t let us assign a D2 instance to a D1 reference.

    The implication is that the C# compiler will fail to find a type that will reconcile the catch handlers above. In particular, due to the contravariance of delegate parameters, we cannot type the sequence as Func<Exception, TR>, since neither Func<NullReferenceException, TR>, nor Func<InvalidOperationException, TR>, nor Func<FormatException, TR> are assignable to Func<Exception, TR>. It would go the other way around: we could assign a Func<Exception, TR> to all three of the other types, but which one should the compiler pick? If it (arbitrarily) picked Func<NullReferenceException, TR>, clearly it wouldn’t work for the two other delegates – and all other choices have the same problem.

    So we’re stuck. Sort of. The only solution we have is to hide the exception type somehow, so that we don’t have to include the exception type in the type of the sequence. Now how do we do that? Well, in some sense, we’ve already seen an example of how to do that: we hide the exception handling (and the type) inside a closure. So all we need is some way to convert an exception handler to a simple transformation function that doesn’t care about the type of the exception itself. Like this:
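
A sketch of such a conversion, reusing the Catch extension method from above (the name Encase is my own):

public static Func<Func<TR>, Func<TR>> Encase<TR, TE>(Func<TE, TR> h)
    where TE : Exception
{
    return f => f.Catch(h);
}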

    So what is this thing? It’s a method that encapsulates the catch-handler inside a closure. This closure will take as input a closure of type Func<TR> and produce as output another closure of type Func<TR>. In the process, we have hidden the type TE, so that the C# compiler doesn’t have to worry about it anymore: all we have is a thing that will transform a Func<TR> to another Func<TR>.

    So now we can sort of accomplish what we wanted, even though it’s less than perfect.

    But now we can have some fun using Linq’s Aggregate method to compose our exception handlers. So we might write code like this:
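
A sketch, reusing the Encase helper from above (Aggregate needs using System.Linq):

var handlers = new Func<Func<string>, Func<string>>[]
{
    Encase((NullReferenceException ex) => ex.Message),
    Encase((InvalidOperationException ex) => ex.Message),
    Encase((FormatException ex) => ex.Message)
};

Func<string> thing = () => FetchStringSomewhere();

var safe = handlers.Aggregate(thing, (acc, nxt) => nxt(acc));

Console.WriteLine(safe());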

    The cool part is obviously the Aggregate call, where acc is the “accumulated” composed closure, nxt is the next encapsulated exception handler and thing is the thing we’re trying to protect with our exception handlers – so in other words, the closure that contains the call to FetchStringSomewhere.

    And of course we can now implement CatchAll if we want to:
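
For instance:

public static Func<TR> CatchAll<TR>(
    this Func<TR> f,
    params Func<Func<TR>, Func<TR>>[] handlers)
{
    return handlers.Aggregate(f, (acc, nxt) => nxt(acc));
}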

    Now please, if you are Eric Lippert and can come up with code that proves that I’m wrong with respect to the typing of sequences of exception handler delegates – please let me know! I would very much like to be corrected if that is the case.