Aspects without Aspects

In the previous blog posts, we saw that we could hide the problematic concrete exception type from the C# compiler by tucking it inside a transformation from a closure of type Func<TR> to another closure of the same type. But of course we can use such transformations for many things besides exception handling. Any old behaviour that we would like to throw on top of the business logic, we can apply in layers using this approach.

This capability is so cool that I took a break from writing this blog post to share my enthusiasm with my wife. She was like, “what are you blogging about?”, and I was like “there’s this really cool thing you can do, where you apply this transformation to some method call, and then you can like, do additional stuff with it, entirely transparently!”, and she was like “like what?”, and I was like “like anything!”, and she was like “like what?”, and I was like “anything you want!”, but she was still like “like what though?” and then I turned more like “uh… uh… like say you had this method that returned a string – you could easily transform that into a method that looked exactly the same, but returned the reversed string instead”, and she was like “…the reversed string? why?” and I was like “or-or-or maybe you could return the uppercase string instead…?”, and she was like “uppercase?” with totally uppercase eyebrows and I was like “nonono! I got it! say you had this method that did a really expensive and slow computation, you could turn that into a method that would keep the result of that computation around, so you didn’t have to do the actual computation all the time”, and she was like “oh, that’s cool” and I was like “phew! I’m never talking to her about blogging again!”.

So that was a close call. But yes, you can totally use this for caching. All we need is a suitable transformation thing.


public static Func<T> Caching<T>(this Func<T> f)
{
    bool cached = false;
    T t = default(T);
    return () => {
        if (cached) return t;
        t = f();
        cached = true;
        return t;
    };
}


Here, we’re taking advantage of the fact that C# has mutable closures – that is, that we can write to cached and t from inside the body of the lambda expression.
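As a tiny standalone illustration of that mutability (a contrived snippet of my own, not part of the caching example):

```csharp
using System;

class MutableClosureDemo
{
    static void Main()
    {
        int count = 0;
        // The lambda captures the local variable itself, not a copy,
        // so each call writes back to the same count.
        Action increment = () => { count++; };
        increment();
        increment();
        Console.WriteLine(count); // prints 2
    }
}
```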

To verify that it works, we need a suitable example – something that’s really expensive and slow to compute. And as we all know, one of the most computationally intensive things we can do in a code example is to sleep:


Func<string> q = () => {
    Thread.Sleep(2000);
    return "Hard-obtained string";
};
Console.WriteLine(q());
Console.WriteLine(q());
Console.WriteLine(q());
q = q.Caching();
Console.WriteLine(q());
Console.WriteLine(q());
Console.WriteLine(q());


Well, what kind of behaviour should we expect from this code? Obviously, the first three q calls will be slow. But what about the last three? They execute the caching closure instead. When we execute the fourth call, cached is false, so the if test fails, and we proceed to evaluate the original, non-caching q (which is slow), tuck away the result value for later, set the cached flag to true, and return the computed result – the hard-obtained string. But the fifth and sixth calls should be quick, since cached is now true, and we have a cached result value to return to the caller, without ever having to resort to the original q.

That’s theory. Here’s practice:
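One way to make the behaviour checkable without staring at timings is to count invocations of the underlying closure instead of sleeping – the counter is my addition, not part of the original example:

```csharp
using System;

static class CachingDemo
{
    // Same Caching transformation as above.
    static Func<T> Caching<T>(this Func<T> f)
    {
        bool cached = false;
        T t = default(T);
        return () => {
            if (cached) return t;
            t = f();
            cached = true;
            return t;
        };
    }

    static void Main()
    {
        int calls = 0;
        Func<string> q = () => { calls++; return "Hard-obtained string"; };
        q = q.Caching();
        q(); q(); q();
        // The underlying closure ran exactly once;
        // the second and third calls were served from the cache.
        Console.WriteLine(calls); // prints 1
    }
}
```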

So that seems to work according to plan. What else can we do? We’ve seen exception handling in the previous posts and caching in this one – both examples of so-called “cross-cutting concerns” in our applications. Cross-cutting concerns was hot terminology ten years ago, when the enterprise world discovered the power of the meta-object protocol (without realizing it, of course). It did so in the guise of aspect-oriented programming, which carried with it a whole vocabulary besides the term “cross-cutting concerns” itself, including “advice” (additional behaviour to handle), “join points” (places in your code where the additional behaviour may be applied) and “pointcuts” (a way of specifying declaratively which join points the advice applies to). And indeed, we can use these transformations that we’ve been doing to implement a sort of poor man’s aspects.

Why a poor man’s aspects? What’s cheap about them? Well, we will be applying advice at various join points, but we won’t be supporting pointcuts to select them. Rather, we will be applying advice to the join points manually. Arguably, therefore, it’s not really aspects at all, and yet we get some of the same capabilities. That’s why we’ll call them aspects without aspects. Make sense?

Let’s consider wrapping some closure f in a hypothetical try-finally-block, and see where we might want to add behaviour.


// 1. Before calling f.
try {
    f();
    // 2. After successful call to f.
}
finally {
    // 3. After any call to f.
}

So we’ll create extension methods to add behaviour in those three places. We’ll call them Before, Success and After, respectively.


public static class AspectExtensions {

    public static Func<T> Before<T>(this Func<T> f, Action a) {
        return () => { a(); return f(); };
    }

    public static Func<T> Success<T>(this Func<T> f, Action a) {
        return () => {
            var result = f();
            a();
            return result;
        };
    }

    public static Func<T> Success<T>(this Func<T> f, Action<T> a) {
        return () => {
            var result = f();
            a(result);
            return result;
        };
    }

    public static Func<T> After<T>(this Func<T> f, Action a) {
        return () => {
            try {
                return f();
            } finally {
                a();
            }
        };
    }

    public static Func<T> After<T>(this Func<T> f, Action<T> a) {
        return () => {
            T result = default(T);
            try {
                result = f();
                return result;
            } finally {
                a(result);
            }
        };
    }
}

Note that we have two options for each of the join points that occur after the call to the original f closure. In some cases you might be interested in the value returned by f, in others you might not be.

How does it work in practice? Let’s look at a contrived example.


static void Main(string[] args)
{
    Func<Func<string>, Func<string>> wrap = fn => fn
        .Before(() => Console.WriteLine("I'm happening early on."))
        .Success(r => Console.WriteLine("Successfully obtained: " + r))
        .Before(() => Console.WriteLine("When do I occur???"))
        .After(r => Console.WriteLine("What did I get? " + r));
    var m1 = wrap(() => {
        Console.WriteLine("Executing m1…");
        return "Hello Kiczales!";
    });
    var m2 = wrap(() => {
        Console.WriteLine("Executing m2…");
        throw new Exception("Boom");
    });
    Call("m1", m1);
    Call("m2", m2);
}

static void Call(string name, Func<string> m) {
    Console.WriteLine(name);
    try {
        Console.WriteLine(name + " returned: " + m());
    }
    catch (Exception ex) {
        Console.WriteLine("Exception in {0}: {1}", name, ex.Message);
    }
    Console.WriteLine();
}

So here we have a transformation thing that takes a Func<string> closure and returns another Func<string> closure, with several pieces of advice applied. Can you work out when the different closures will be executed?

We start with some closure fn, but before fn executes, the first Before must execute (that’s why we call it Before!). Assuming both of these execute successfully (without throwing an exception), the Success will execute. But before all these things, the second Before must execute! And finally, regardless of how the execution turns out with respect to exceptions, the After should execute.

In the case of m1, no exception occurs, so we should see the message “Successfully obtained: Hello Kiczales!” in between “Executing m1…” and “What did I get? Hello Kiczales!”. In the case of m2, on the other hand, we do get an exception, so the Success closure is never executed.
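To make the ordering concrete without relying on console output, here’s a sketch that records events in a list instead of printing them (the event log is my addition, but the advice combinators are the ones defined above):

```csharp
using System;
using System.Collections.Generic;

static class AdviceOrderDemo
{
    static Func<T> Before<T>(this Func<T> f, Action a)
    {
        return () => { a(); return f(); };
    }

    static Func<T> Success<T>(this Func<T> f, Action<T> a)
    {
        return () => { var r = f(); a(r); return r; };
    }

    static Func<T> After<T>(this Func<T> f, Action<T> a)
    {
        return () => {
            T r = default(T);
            try { return r = f(); } finally { a(r); }
        };
    }

    static void Main()
    {
        var log = new List<string>();
        Func<string> m1 = () => { log.Add("body"); return "Hello Kiczales!"; };
        var wrapped = m1
            .Before(() => log.Add("before1"))
            .Success(r => log.Add("success"))
            .Before(() => log.Add("before2"))
            .After(r => log.Add("after"));
        wrapped();
        // The outermost Before fires first, After fires last.
        Console.WriteLine(string.Join(",", log));
        // prints before2,before1,body,success,after
    }
}
```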

A screenshot of my console verifies this:

[Screen Shot 2014-07-04 at 10.59.07 PM]

So we’ve seen that we can do fluent exception handling, caching and aspects without aspects using the same basic idea: we take something of type Func<TR> and produce something else of the same type. Of course, this means that we’re free to mix and match all of these things if we wanted to, and compose them all using LINQ’s Aggregate method! For once, though, I think I’ll leave that as an exercise for the reader.

And of course, we can transform other things besides closures as well – we can use the same approach to transform any instance of type T to some other T instance. In fact, let’s declare a delegate to capture such a generalized concept:


delegate T Decorate<T>(T t);


Why Decorate? Well, nothing is ever new on this blog. I’m just rediscovering old ideas and reinventing flat tires, as Alan Kay put it. In this case, it turns out that all we’ve been doing is looking at the good old Decorator pattern from the GoF book in a new or unfamiliar guise.
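To see the connection, here’s the classic interface-based Decorator expressed through the delegate – IGreeter and friends are made up for illustration:

```csharp
using System;

delegate T Decorate<T>(T t);

interface IGreeter { string Greet(); }

class PlainGreeter : IGreeter
{
    public string Greet() { return "hello"; }
}

// A GoF-style decorator: implements the same interface, wraps an inner instance.
class ShoutingGreeter : IGreeter
{
    private readonly IGreeter _inner;
    public ShoutingGreeter(IGreeter inner) { _inner = inner; }
    public string Greet() { return _inner.Greet().ToUpper() + "!"; }
}

class DecorateDemo
{
    static void Main()
    {
        // Wrapping the decorator constructor in a Decorate<T> gives the
        // same T-to-T shape as our Func<TR> transformations.
        Decorate<IGreeter> shout = g => new ShoutingGreeter(g);
        IGreeter greeter = shout(new PlainGreeter());
        Console.WriteLine(greeter.Greet()); // prints HELLO!
    }
}
```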


Dry data

I’m a big proponent of the NoORM movement. Haven’t heard of it? That’s because it doesn’t exist. But it sort of does, under a different name. So-called “micro-ORMs” like Simple.Data, dapper and massive all belong to this category. That’s three really smart guys (unwittingly) supporting the same cause as I. Not bad. I’d say that’s the genesis of a movement right there.

Unsurprisingly, NoORM means “not only ORM”. The implication is that there are scenarios where full-blown object-relational mapper frameworks like NHibernate and Entity Framework are overkill. Such frameworks really go beyond merely addressing the infamous object/relational impedance mismatch (which is arguably irreconcilable), to support an almost seamless experience of persistent objects stored in a relational database. To do so, they pull out the heavy guns, like the Unit of Work pattern from Martin Fowler’s Patterns of Enterprise Application Architecture (one of those seminal tomes with an “on bookshelf”-to-“actually read it” ratio that’s just a little too high).

And that’s great! I always say, let someone else “maintain a list of objects affected by a business transaction and coordinate the writing out of changes and the resolution of concurrency problems”. Preferably someone smarter than me. It’s hard to get right, and in the right circumstances, being able to leverage a mature framework to do that heavy lifting for you is a huge boon.

Make sure you have some heavy lifting to do, though. All the power, all the functionality, all the goodness supported by state-of-the-art ORMs, comes at a cost. There’s a cost in configuration, in conceptual overhead, in overall complexity of your app. Potentially there’s a cost in performance as well. What if you don’t care about flexible ways of configuring the mapping of a complex hierarchy of rich domain objects onto a highly normalized table structure? What if you don’t need to support concurrent, persistent manipulation of the state of those objects? What if all you need is to grab some data and go to town? In that case, you might be better served with something simpler, like raw ADO.NET calls or some thin, unambitious veneer on top of that.

Now there’s one big problem with using raw ADO.NET calls: repetition (lack of DRYness). You basically have to go through the same song-and-dance every time, with just minor variations. With that comes boredom and bugs. So how do we avoid that? How do we fight repetition and duplication? By abstraction, of course. We single out the stuff that varies and abstract away the rest. Needless to say, the less that varies, the simpler and more powerful the abstraction. If you can commit to some serious constraints with respect to variety, your abstraction becomes that much more succinct and that much more powerful. Of course, in order to go abstract, we first need to go concrete. So let’s do that.

Here’s a scenario: there’s a database. A big, honkin’ legacy database. It’s got a gazillion stored procedures, reams and reams of business logic written in T-SQL by some tech debt nomad who has since moved on to greener pastures. A dozen business critical applications rely on it. It’s not something you’ll want to touch with a ten-foot pole. The good thing is you don’t have to. For the scope of the project you’re doing, all you need to do is grab some data and go. Invoke a handful of those archaic stored procedures to get the data you need, and you’re done. Home free. Have a cup of tea.

Now what sort of constraints can we embrace and exploit in this scenario?

  1. Everything will be stored procedures.
  2. It’s SQL Server, and that’s not going to change.

As it turns out, the second point is not really significant, since we’ll need database-agnostic code if we’re going to write tests. The first one is interesting though. We’ll also assume that the stored procedures will accept input parameters only. That’s going to simplify our code a great deal.

Let’s start by introducing a naive client doing straight invocation of a few stored procedures in plain ol’ ADO.NET:


public class Client1
{
    private readonly string _connStr;
    private readonly DbProviderFactory _dpf;

    public Client1(string connStr) : this(connStr,
        DbProviderFactories.GetFactory("System.Data.SqlClient"))
    {}

    public Client1(string connStr, DbProviderFactory dpf)
    {
        _connStr = connStr;
        _dpf = dpf;
    }

    private DbParameter CreateParameter(string name, object val)
    {
        var p = _dpf.CreateParameter();
        p.ParameterName = name;
        p.Value = val;
        return p;
    }

    public IEnumerable<User> GetCompanyUsers(string company)
    {
        var result = new List<User>();
        using (var conn = _dpf.CreateConnection())
        using (var cmd = _dpf.CreateCommand())
        {
            conn.ConnectionString = _connStr;
            conn.Open();
            cmd.Connection = conn;
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.CommandText = "spGetCompanyUsers";
            var p = CreateParameter("@companyName", company);
            cmd.Parameters.Add(p);
            var reader = cmd.ExecuteReader();
            while (reader.Read())
            {
                var u = new User
                {
                    Id = (string) reader["id"],
                    UserName = (string) reader["user"],
                    Name = (string) reader["name"],
                    Email = (string) reader["emailAddress"],
                    Phone = (string) reader["cellPhone"],
                    ZipCode = (string) reader["zip"]
                };
                result.Add(u);
            }
        }
        return result;
    }

    public string GetUserEmail(string userId)
    {
        using (var conn = _dpf.CreateConnection())
        using (var cmd = _dpf.CreateCommand())
        {
            conn.ConnectionString = _connStr;
            conn.Open();
            cmd.Connection = conn;
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.CommandText = "spGetEmailForUser";
            var p = CreateParameter("@userId", userId);
            cmd.Parameters.Add(p);
            return (string) cmd.ExecuteScalar();
        }
    }

    public void StoreUser(User u)
    {
        using (var conn = _dpf.CreateConnection())
        using (var cmd = _dpf.CreateCommand())
        {
            conn.ConnectionString = _connStr;
            conn.Open();
            cmd.Connection = conn;
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.CommandText = "spInsertOrUpdateUser";
            var ps = new [] {
                CreateParameter("@userId", u.Id),
                CreateParameter("@user", u.UserName),
                CreateParameter("@name", u.Name),
                CreateParameter("@emailAddress", u.Email),
                CreateParameter("@cellPhone", u.Phone),
                CreateParameter("@zip", u.ZipCode)
            };
            cmd.Parameters.AddRange(ps);
            cmd.ExecuteNonQuery();
        }
    }
}


So you can see, there’s a great deal of duplication going on there. And obviously, as you add new queries and commands, the amount of duplication increases linearly. It’s the embryo of a maintenance nightmare right there. But we’ll fight back with that trusty ol’ weapon of ours: abstraction! To arrive at a suitable one, let’s play a game of compare and contrast.

What varies?

  • The list of input parameters.
  • In the case of queries: the data row we’re mapping from and the .NET type we’re mapping to.
  • The names of stored procedures.
  • The execute method (ExecuteReader, ExecuteScalar, ExecuteNonQuery). We’re gonna ignore DataSets since I don’t like them. (I’ll be using my own anemic POCOs, thank you very much!).

What stays the same?

  • The connection string.
  • The need to create and open a connection.
  • The need to create and configure a command object.
  • The need to execute the command against the database.
  • The need to map the result of the command to some suitable representation (unless we’re doing ExecuteNonQuery).

There are a couple of design patterns that spring to mind, like Strategy or Template method, that might help us clean things up. We’ll be leaving GoF on the shelf next to PoEAA, though, and use lambdas and generic methods instead.

I take “don’t repeat yourself” quite literally. So we’re aiming for a single method where we’ll be doing all our communication with the database. We’re going to channel all our queries and commands through that same method, passing in just the stuff that varies.

To work towards that goal, let’s refactor into some generic methods:


public class Client2
{
    private readonly string _connStr;
    private readonly DbProviderFactory _dpf;

    public Client2(string connStr) : this(connStr,
        DbProviderFactories.GetFactory("System.Data.SqlClient"))
    {}

    public Client2(string connStr, DbProviderFactory dpf)
    {
        _connStr = connStr;
        _dpf = dpf;
    }

    private DbParameter CreateParameter(string name, object val)
    {
        var p = _dpf.CreateParameter();
        p.ParameterName = name;
        p.Value = val;
        return p;
    }

    public IEnumerable<User> GetCompanyUsers(string company)
    {
        return ExecuteReader("spGetCompanyUsers",
            new[] {CreateParameter("@companyName", company)},
            r => new User
            {
                Id = (string) r["id"],
                UserName = (string) r["user"],
                Name = (string) r["name"],
                Email = (string) r["emailAddress"],
                Phone = (string) r["cellPhone"],
                ZipCode = (string) r["zip"]
            });
    }

    public IEnumerable<T> ExecuteReader<T>(string spName,
        DbParameter[] sqlParams, Func<IDataRecord, T> map)
    {
        var result = new List<T>();
        using (var conn = _dpf.CreateConnection())
        using (var cmd = _dpf.CreateCommand())
        {
            conn.ConnectionString = _connStr;
            conn.Open();
            cmd.Connection = conn;
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.CommandText = spName;
            cmd.Parameters.AddRange(sqlParams);
            var reader = cmd.ExecuteReader();
            while (reader.Read())
            {
                result.Add(map(reader));
            }
        }
        return result;
    }

    public string GetUserEmail(string userId)
    {
        return ExecuteScalar("spGetEmailForUser",
            new[] {CreateParameter("@userId", userId)},
            o => o as string);
    }

    public T ExecuteScalar<T>(string spName,
        DbParameter[] sqlParams, Func<object, T> map)
    {
        using (var conn = _dpf.CreateConnection())
        using (var cmd = _dpf.CreateCommand())
        {
            conn.ConnectionString = _connStr;
            conn.Open();
            cmd.Connection = conn;
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.CommandText = spName;
            cmd.Parameters.AddRange(sqlParams);
            return map(cmd.ExecuteScalar());
        }
    }

    public void StoreUser(User u)
    {
        ExecuteNonQuery("spInsertOrUpdateUser",
            new[]
            {
                CreateParameter("@userId", u.Id),
                CreateParameter("@user", u.UserName),
                CreateParameter("@name", u.Name),
                CreateParameter("@emailAddress", u.Email),
                CreateParameter("@cellPhone", u.Phone),
                CreateParameter("@zip", u.ZipCode)
            });
    }

    public void ExecuteNonQuery(string spName,
        DbParameter[] sqlParams)
    {
        using (var conn = _dpf.CreateConnection())
        using (var cmd = _dpf.CreateCommand())
        {
            conn.ConnectionString = _connStr;
            conn.Open();
            cmd.Connection = conn;
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.CommandText = spName;
            cmd.Parameters.AddRange(sqlParams);
            cmd.ExecuteNonQuery();
        }
    }
}


So we’ve bloated the code a little bit – in fact, we just doubled the number of methods. But we’re in a much better position to write new queries and commands. We’re done with connections and usings and what have you. Later on, we can just reuse the same generic methods.

However, we still have some glaring duplication hurting our eyes: the three execute methods are practically identical. So while the code is much DRYer than the original, there’s still some moisture in there. And moisture leads to smell and rot.

To wring those few remaining drops out of the code, we need to abstract over the execute methods. The solution? To go even more generic!


public TResult Execute<T, TResult>(string spName,
    DbParameter[] sqlParams, Func<IDbCommand, T> execute,
    Func<T, TResult> map)
{
    using (var conn = _dpf.CreateConnection())
    using (var cmd = _dpf.CreateCommand())
    {
        conn.ConnectionString = _connStr;
        conn.Open();
        cmd.Connection = conn;
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.CommandText = spName;
        cmd.Parameters.AddRange(sqlParams);
        return map(execute(cmd));
    }
}


So basically the solution is to pass in a function that specifies the execute method to run. The other execute methods can use this to get their stuff done. Now that we have our single, magical do-all database interaction method, let's make things a bit more reusable. We'll cut the database code out of the client, and introduce a tiny abstraction. Let's call it Database, since that's what it is. In fact, for good measure, let's throw in a new method that might be useful in the process: ExecuteRow. Here's the code:


public class Database
{
    private readonly string _connStr;
    private readonly DbProviderFactory _dpf;

    public Database(string connStr) : this(connStr,
        DbProviderFactories.GetFactory("System.Data.SqlClient"))
    {}

    public Database(string connStr, DbProviderFactory dpf)
    {
        _connStr = connStr;
        _dpf = dpf;
    }

    public IEnumerable<T> ExecuteReader<T>(string spName,
        DbParameter[] sqlParams, Func<IDataRecord, T> map)
    {
        return Execute(spName, sqlParams, cmd => cmd.ExecuteReader(),
            r =>
            {
                var result = new List<T>();
                while (r.Read())
                {
                    result.Add(map(r));
                }
                return result;
            });
    }

    public T ExecuteRow<T>(string spName,
        DbParameter[] sqlParams, Func<IDataRecord, T> map)
    {
        return ExecuteReader(spName, sqlParams, map).First();
    }

    public T ExecuteScalar<T>(string spName,
        DbParameter[] sqlParams, Func<object, T> map)
    {
        return Execute(spName, sqlParams,
            cmd => cmd.ExecuteScalar(), map);
    }

    public void ExecuteNonQuery(string spName,
        DbParameter[] sqlParams)
    {
        Execute(spName, sqlParams,
            cmd => cmd.ExecuteNonQuery(),
            o => o);
    }

    public TResult Execute<T, TResult>(string spName,
        DbParameter[] sqlParams, Func<IDbCommand, T> execute,
        Func<T, TResult> map)
    {
        using (var conn = _dpf.CreateConnection())
        using (var cmd = _dpf.CreateCommand())
        {
            conn.ConnectionString = _connStr;
            conn.Open();
            cmd.Connection = conn;
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.CommandText = spName;
            cmd.Parameters.AddRange(sqlParams);
            return map(execute(cmd));
        }
    }
}


ExecuteScalar is pretty straightforward, but there are a few interesting details concerning the others. First, ExecuteReader derives a map from IDataReader to IEnumerable<T> from the user-supplied map from IDataRecord to T. Second, ExecuteNonQuery doesn’t really care about the result from calling DbCommand.ExecuteNonQuery against the database (which indicates the number of rows affected by the command/non-query). So we’re providing the simplest possible map – the identity map – to the Execute method.
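That lifting trick in ExecuteReader can be shown in isolation, ADO.NET-free: derive a “map them all” function from a per-item map, the way ExecuteReader derives its reader map from the record map. (Lift is my name for it, and the string-length example is made up; the shape is what matters.)

```csharp
using System;
using System.Collections.Generic;

static class LiftDemo
{
    // Derive a function over a whole sequence from a per-item map,
    // analogous to deriving the IDataReader map from the IDataRecord map.
    static Func<IEnumerable<TIn>, IEnumerable<TOut>> Lift<TIn, TOut>(Func<TIn, TOut> map)
    {
        return items => {
            var result = new List<TOut>();
            foreach (var item in items)
            {
                result.Add(map(item));
            }
            return result;
        };
    }

    static void Main()
    {
        var toLengths = Lift((string s) => s.Length);
        var lengths = toLengths(new[] { "grab", "data", "go" });
        Console.WriteLine(string.Join(",", lengths)); // prints 4,4,2
    }
}
```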

So the execution code is pretty DRY now. Basically, you’re just passing in the stuff that varies. And there’s a single method actually creating connections and commands and executing them against the database. Good stuff.

Let’s attack redundancy in the client code. Here’s what it looks like at the moment:


public class Client4
{
    private readonly Database _db;

    public Client4(Database db)
    {
        _db = db;
    }

    public IEnumerable<User> GetCompanyUsers(string company)
    {
        return _db.ExecuteReader("spGetCompanyUsers",
            new[] { new SqlParameter("@companyName", company) },
            r => new User
            {
                Id = (string)r["id"],
                UserName = (string)r["user"],
                Name = (string)r["name"],
                Email = (string)r["emailAddress"],
                Phone = (string)r["cellPhone"],
                ZipCode = (string)r["zip"]
            });
    }

    public string GetUserEmail(string userId)
    {
        return _db.ExecuteScalar("spGetEmailForUser",
            new[] { new SqlParameter("@userId", userId) },
            o => o as string);
    }

    public void StoreUser(User u)
    {
        _db.ExecuteNonQuery("spInsertOrUpdateUser",
            new[]
            {
                new SqlParameter("@userId", u.Id),
                new SqlParameter("@user", u.UserName),
                new SqlParameter("@name", u.Name),
                new SqlParameter("@emailAddress", u.Email),
                new SqlParameter("@cellPhone", u.Phone),
                new SqlParameter("@zip", u.ZipCode)
            });
    }
}


Actually, it’s not too bad, but I’m not happy about the repeated chanting of new SqlParameter. We’ll introduce a simple abstraction to DRY up that too, and give us a syntax that’s a bit more succinct and declarative-looking.


public class StoredProcedure
{
    private readonly DbProviderFactory _dpf;
    private readonly DbCommand _sp;

    public StoredProcedure(DbCommand sp, DbProviderFactory dpf)
    {
        _sp = sp;
        _dpf = dpf;
    }

    public StoredProcedure this[string parameterName,
        object value, int? size = null, DbType? type = null]
    {
        get { return AddParameter(parameterName, value, size, type); }
    }

    public StoredProcedure AddParameter(string parameterName,
        object value, int? size = null, DbType? type = null)
    {
        var p = _dpf.CreateParameter();
        if (p != null)
        {
            p.ParameterName = parameterName;
            p.Value = value;
            if (size.HasValue)
            {
                p.Size = size.Value;
            }
            if (type.HasValue)
            {
                p.DbType = type.Value;
            }
            _sp.Parameters.Add(p);
        }
        return this;
    }
}

This is basically a sponge for parameters. It uses a little trick with a get-indexer with side-effects to do its thing. This allows for a simple fluent syntax to add parameters to a DbCommand object. Let’s refactor the generic Execute method to use it.
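Stripped of ADO.NET, the indexer trick looks like this – ParamBag is a made-up stand-in for StoredProcedure:

```csharp
using System;
using System.Collections.Generic;

// A get-indexer with a side effect: each lookup records a parameter
// and returns this, so lookups chain fluently.
class ParamBag
{
    public readonly List<KeyValuePair<string, object>> Items =
        new List<KeyValuePair<string, object>>();

    public ParamBag this[string name, object value]
    {
        get
        {
            Items.Add(new KeyValuePair<string, object>(name, value));
            return this;
        }
    }
}

class ParamBagDemo
{
    static void Main()
    {
        var bag = new ParamBag()["@userId", 42]["@name", "Kiczales"];
        Console.WriteLine(bag.Items.Count); // prints 2
    }
}
```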


public TResult Execute<T, TResult>(string spName,
    Func<StoredProcedure, StoredProcedure> configure,
    Func<IDbCommand, T> execute,
    Func<T, TResult> map)
{
    using (var conn = _dpf.CreateConnection())
    using (var cmd = _dpf.CreateCommand())
    {
        conn.ConnectionString = _connStr;
        conn.Open();
        cmd.Connection = conn;
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.CommandText = spName;
        configure(new StoredProcedure(cmd, _dpf));
        return map(execute(cmd));
    }
}

The refactoring ripples through to the other execute methods as well, meaning you pass in a Func instead of the parameter array. Now the interesting part is how the new abstraction affects the client code. Here’s how:


public class Client5
{
    private readonly Database _db;

    public Client5(Database db)
    {
        _db = db;
    }

    public IEnumerable<User> GetCompanyUsers(string company)
    {
        return _db.ExecuteReader(
            "spGetCompanyUsers",
            sp => sp["@companyName", company],
            r => new User
            {
                Id = (string) r["id"],
                UserName = (string) r["user"],
                Name = (string) r["name"],
                Email = (string) r["emailAddress"],
                Phone = (string) r["cellPhone"],
                ZipCode = (string) r["zip"]
            });
    }

    public string GetUserEmail(string userId)
    {
        return _db.ExecuteScalar(
            "spGetEmailForUser",
            sp => sp["@userId", userId],
            o => o as string);
    }

    public void StoreUser(User u)
    {
        _db.ExecuteNonQuery(
            "spInsertOrUpdateUser",
            sp => sp["@userId", u.Id]
                    ["@user", u.UserName]
                    ["@name", u.Name]
                    ["@emailAddress", u.Email]
                    ["@cellPhone", u.Phone]
                    ["@zip", u.ZipCode]);
    }
}


Which is pretty much as DRY as it gets, at least in my book. We just grab the data and go. Wheee! Where’s my tea?