The Don’t-Repeat-Yourself (DRY) design principle in .NET Part 3

We’ll finish up the DRY series with the Repeated Execution Pattern. This pattern can be used when you see similar chunks of code repeated in several places. Here we’re talking about code bits that are not 100% identical but follow the same pattern and can clearly be factored out.

Here’s an example:

static void Main(string[] args)
{
	Console.WriteLine("About to run the DoSomething method");
	DoSomething();
	Console.WriteLine("Finished running the DoSomething method");
	Console.WriteLine("About to run the DoSomethingAgain method");
	DoSomethingAgain();
	Console.WriteLine("Finished running the DoSomethingAgain method");
	Console.WriteLine("About to run the DoSomethingMore method");
	DoSomethingMore();
	Console.WriteLine("Finished running the DoSomethingMore method");
	Console.WriteLine("About to run the DoSomethingExtraordinary method");
	DoSomethingExtraordinary();
	Console.WriteLine("Finished running the DoSomethingExtraordinary method");
	
	Console.ReadLine();
}

private static void DoSomething()
{
	WriteToConsole("Nils", "a good friend", 30);
}

private static void DoSomethingAgain()
{
	WriteToConsole("Christian", "a neighbour", 54);
}

private static void DoSomethingMore()
{
	WriteToConsole("Eva", "my daughter", 4);
}

private static void DoSomethingExtraordinary()
{
	WriteToConsole("Lilly", "my daughter's best friend", 4);
}

private static void WriteToConsole(string name, string description, int age)
{
	// format and address are the class-level fields introduced in Part 1
	Console.WriteLine(format, name, description, address, age);
}

We’re simulating a simple logging function every time we run one of these “DoSomething” methods. The pattern is clear: write a message to the console, carry out an action and write another message to the console. The actions all have an identical void, parameterless signature. The logging messages all have the same format; only the method name varies. If this chain of actions continues to grow then we have to come back here and add the same type of logging messages. Also, if you later wish to change the logging format, you’ll have to do it in many different places.

The first step is to factor out a single console-action-console chunk to its own method:

private static void ExecuteStep()
{
	Console.WriteLine("About to run the DoSomething method");
	DoSomething();
	Console.WriteLine("Finished running the DoSomething method");
}

This is of course not good enough as the method is very rigid. It is hard-coded to execute the first step only. We can vary the action to be executed by accepting an Action delegate:

private static void ExecuteStep(Action action)
{
	Console.WriteLine("About to run the DoSomething method");
	action();
	Console.WriteLine("Finished running the DoSomething method");
}

We can call this method as follows:

static void Main(string[] args)
{
	ExecuteStep(DoSomething);
	ExecuteStep(DoSomethingAgain);
	ExecuteStep(DoSomethingExtraordinary);
	ExecuteStep(DoSomethingMore);
	Console.ReadLine();
}

Except that we’re not logging the method names correctly. That’s still hard-coded to “DoSomething”. That’s easy to fix, as the Action delegate exposes the method name through its Method property:

private static void ExecuteStep(Action action)
{
	string methodName = action.Method.Name;
	Console.WriteLine("About to run the {0} method", methodName);
	action();
	Console.WriteLine("Finished running the {0} method", methodName);
}

We’re almost done. If you look at the Main method, ExecuteStep(someMethod) is called 4 times. That is also a form of DRY violation. Imagine that you have a long workflow, such as the steps in a chemical experiment. In that case you may need to repeat the call to ExecuteStep many times.

We can instead put the methods to be executed in a collection of actions:

private static IEnumerable<Action> GetExecutionSteps()
{
	return new List<Action>()
	{
		DoSomething
		, DoSomethingAgain
		, DoSomethingExtraordinary
		, DoSomethingMore
	};
}

You can use this from within Main as follows:

static void Main(string[] args)
{
	IEnumerable<Action> actions = GetExecutionSteps();
	foreach (Action action in actions)
	{
		ExecuteStep(action);
	}
	Console.ReadLine();
}

Now it’s not the responsibility of the Main method to define the steps to be executed. It only iterates over the collection and calls ExecuteStep for each action.
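The same wrapper also works for steps that return a value. Here’s a minimal sketch – the ExecuteStep<T> overload below is not part of the original example, only an illustration – that uses Func<T> instead of Action:

private static T ExecuteStep<T>(Func<T> step)
{
	// Same console-action-console pattern, but the step produces a result
	string methodName = step.Method.Name;
	Console.WriteLine("About to run the {0} method", methodName);
	T result = step();
	Console.WriteLine("Finished running the {0} method", methodName);
	return result;
}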

View the list of posts on Architecture and Patterns here.

The Don’t-Repeat-Yourself (DRY) design principle in .NET Part 2

We’ll continue with our discussion of the Don’t-Repeat-Yourself principle where we left off in the previous post. The next issue we’ll consider is repetition of logic.

Repeated logic

Consider that you have the following two domain objects:

public class Product 
{
	public long Id { get; set; }
	public string Description { get; set; }
}
public class Order
{
	public long Id { get; set; }
}

Let’s say that the IDs are not automatically assigned when inserting a new row in the database. Instead, they must be calculated. So you come up with the following function to construct an ID that is probably unique:

private long CalculateId()
{
	TimeSpan ts = DateTime.UtcNow - (new DateTime(1970, 1, 1, 0, 0, 0));
	long id = Convert.ToInt64(ts.TotalMilliseconds);
	return id;
}

You might include this type of logic in both domain objects:

public class Product 
{
	public long Id { get; set; }
	public string Description { get; set; }

	public Product()
	{
		Id = CalculateId();
	}

	private long CalculateId()
	{
		TimeSpan ts = DateTime.UtcNow - (new DateTime(1970, 1, 1, 0, 0, 0));
		long id = Convert.ToInt64(ts.TotalMilliseconds);
		return id;
	}
}
public class Order
{
	public long Id { get; set; }

	public Order()
	{
		Id = CalculateId();
	}

	private long CalculateId()
	{
		TimeSpan ts = DateTime.UtcNow - (new DateTime(1970, 1, 1, 0, 0, 0));
		long id = Convert.ToInt64(ts.TotalMilliseconds);
		return id;
	}
}

This situation may arise if the two domain objects were added to your application a long time apart and you’d forgotten about the ID generation solution. Also, if you want to keep the ID generation logic independent for each object, you might stick with this solution, thinking that some day the ID generation strategies may diverge. However, at some point the rules change and all IDs of type long must be constructed using the CalculateId method. Then you absolutely want to have this logic in one place only. Otherwise, when the rule changes again, you’ll have to make the same change for every single domain object, right?

Probably a very common solution would be to factor out this logic to a static method:

public class IdHelper
{
	public static long CalculateId()
	{
		TimeSpan ts = DateTime.UtcNow - (new DateTime(1970, 1, 1, 0, 0, 0));
		long id = Convert.ToInt64(ts.TotalMilliseconds);
		return id;
	}
}

The updated objects look as follows:

public class Order
{
	public long Id { get; set; }

	public Order()
	{
		Id = IdHelper.CalculateId();
	}
}
public class Product 
{
	public long Id { get; set; }
	public string Description { get; set; }

	public Product()
	{
		Id = IdHelper.CalculateId();
	}
}

If you’ve followed the discussion on the SOLID design principles then you’ll know by now that static methods can be a design smell that indicates tight coupling. In this case the Product and Order classes have a hard dependency on IdHelper.

If all objects in your domain must have an ID of type long then you may let every object derive from a superclass such as this:

public abstract class EntityBase
{
	public long Id { get; private set; }

	protected EntityBase()
	{
		Id = CalculateId();
	}

	private long CalculateId()
	{
		TimeSpan ts = DateTime.UtcNow - (new DateTime(1970, 1, 1, 0, 0, 0));
		long id = Convert.ToInt64(ts.TotalMilliseconds);
		return id;
	}
}

The Product and Order objects will derive from this class:

public class Product : EntityBase
{
	public string Description { get; set; }

	public Product()
	{}
}
public class Order : EntityBase
{
	public Order()
	{}
}

Then if you construct a new Order or Product object elsewhere, the ID will be assigned by the EntityBase constructor automatically.

If you don’t like the base class approach then constructor injection is another option. We delegate the ID generation logic to an external class which we hide behind an interface:

public interface IIdGenerator
{
	long CalculateId();
}

We have the following implementing class:

public class DefaultIdGenerator : IIdGenerator
{
	public long CalculateId()
	{
		TimeSpan ts = DateTime.UtcNow - (new DateTime(1970, 1, 1, 0, 0, 0));
		long id = Convert.ToInt64(ts.TotalMilliseconds);
		return id;
	}
}

You can inject the interface dependency into the Order object as follows:

public class Order
{
	private readonly IIdGenerator _idGenerator;
	public long Id { get; private set; }

	public Order(IIdGenerator idGenerator)
	{
	if (idGenerator == null) throw new ArgumentNullException("idGenerator");
		_idGenerator = idGenerator;
		Id = _idGenerator.CalculateId();
	}
}
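Constructing such an object by hand – in practice an IoC container can do this for you – then looks like this small usage sketch:

Order order = new Order(new DefaultIdGenerator());
Console.WriteLine(order.Id);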

You can apply the same approach to the Product object. Of course you can also combine the two solutions above with the following EntityBase superclass:

public abstract class EntityBase
{
	private readonly IIdGenerator _idGenerator;

	public long Id { get; private set; }

	protected EntityBase(IIdGenerator idGenerator)
	{
		if (idGenerator == null) throw new ArgumentNullException("idGenerator");
		_idGenerator = idGenerator;
		Id = _idGenerator.CalculateId();
	}
}
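In that combined setup each derived class simply forwards the injected generator to the base constructor. A minimal sketch, assuming the same Order and Product classes as before:

public class Order : EntityBase
{
	public Order(IIdGenerator idGenerator) : base(idGenerator)
	{}
}

public class Product : EntityBase
{
	public string Description { get; set; }

	public Product(IIdGenerator idGenerator) : base(idGenerator)
	{}
}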

These are some of the possible solutions you can employ to factor out common logic so that it becomes available to different objects. Obviously, if the logic occurs only within a single class then simply create a private method for it:

private void DoRepeatedLogic()
{
	Order order = new Order();
	TimeSpan ts = DateTime.UtcNow - (new DateTime(1970, 1, 1, 0, 0, 0));
	long orderId = Convert.ToInt64(ts.TotalMilliseconds);
	order.Id = orderId;

	Product product = new Product();
	ts = DateTime.UtcNow - (new DateTime(1970, 1, 1, 0, 0, 0));
	long productId = Convert.ToInt64(ts.TotalMilliseconds);
	product.Id = productId;
}

This is of course not very clever and you can quickly make it better:

private void DoRepeatedLogic()
{
	Order order = new Order();
	order.Id = CalculateId();

	Product product = new Product();
	product.Id = CalculateId();
}

private long CalculateId()
{
	TimeSpan ts = DateTime.UtcNow - (new DateTime(1970, 1, 1, 0, 0, 0));
	long id = Convert.ToInt64(ts.TotalMilliseconds);
	return id;
}

This is more likely to occur in long classes and methods where you lose track of all the code you’ve written. At some point you realise that some logic is repeated over and over again, but it’s deeply nested in a long, complicated method.

If statements

If statements are very important building blocks of an application. It would probably be impossible to write any real-life app without them. However, that does not mean they should be used without limits. Consider the following domain objects:

public abstract class Shape
{
}

public class Triangle : Shape
{
	public int Base { get; set; }
	public int Height { get; set; }
}

public class Rectangle : Shape
{
	public int Width { get; set; }
	public int Height { get; set; }
}

Then in Program.cs of a Console app we can simulate a database lookup as follows:

private static IEnumerable<Shape> GetAllShapes()
{
	List<Shape> shapes = new List<Shape>();
	shapes.Add(new Triangle() { Base = 5, Height = 3 });
	shapes.Add(new Rectangle() { Height = 6, Width = 4 });
	shapes.Add(new Triangle() { Base = 9, Height = 5 });
	shapes.Add(new Rectangle() { Height = 3, Width = 2 });
	return shapes;
}

Say you want to calculate the total area of the shapes in the collection. The first approach may look like this:

private static double CalculateTotalArea(IEnumerable<Shape> shapes)
{
	double area = 0.0;
	foreach (Shape shape in shapes)
	{
		if (shape is Triangle)
		{
			Triangle triangle = shape as Triangle;
			area += (triangle.Base * triangle.Height) / 2.0;
		}
		else if (shape is Rectangle)
		{
			Rectangle rectangle = shape as Rectangle;
			area += rectangle.Height * rectangle.Width;
		}
	}
	return area;
}

This is actually quite a common approach in software designs where the domain objects are mere collections of properties, devoid of any self-contained logic. Look at the Triangle and Rectangle classes: they contain no logic whatsoever, they only have properties. They are reduced to the role of data-transfer objects (DTOs). If you don’t immediately see what’s wrong with the above solution then I suggest you go through the Liskov Substitution Principle here. I won’t repeat what’s written in that post.

This post is about DRY, so you may ask what this method has to do with DRY at all, as we don’t seem to repeat anything. Yes we do, although indirectly. Our initial intention was to create a class hierarchy so that we can work with the abstract Shape class elsewhere. Well, guess what, we’ve failed miserably. In this method we not only reveal the concrete implementation types of Shape, we also force an external class to know about the internals of those concrete types.

This is a typical example of how not to use if statements in software. In the posts on the SOLID design principles we mentioned the Tell-Don’t-Ask (TDA) principle. It basically states that you should not ask an object questions about its current state before you ask it to perform something. This piece of code is a clear violation of TDA, although the lack of logic in the Triangle and Rectangle classes forced us to ask those questions.

The solution – or at least one of the viable solutions – will be to hide this calculation logic behind each concrete Shape class:

public abstract class Shape
{
	public abstract double CalculateArea();
}

public class Triangle : Shape
{
	public int Base { get; set; }
	public int Height { get; set; }

	public override double CalculateArea()
	{
		return (Base * Height) / 2.0;
	}
}

public class Rectangle : Shape
{
	public int Width { get; set; }
	public int Height { get; set; }

	public override double CalculateArea()
	{
		return Width * Height;
	}
}

The updated total area calculation looks as follows:

private static double CalculateTotalArea(IEnumerable<Shape> shapes)
{
	double area = 0.0;
	foreach (Shape shape in shapes)
	{
		area += shape.CalculateArea();
	}
	return area;
}

We’ve got rid of the if statements, we no longer violate TDA, and the logic to calculate the area is hidden behind each concrete type. This even allows us to follow the above-mentioned Liskov Substitution Principle.
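To see the extensibility benefit, consider adding a brand new shape. The Circle class below is only a hypothetical sketch, but note that CalculateTotalArea needs no modification whatsoever to include it in the sum:

public class Circle : Shape
{
	public int Radius { get; set; }

	public override double CalculateArea()
	{
		// Area of a circle: pi * r^2
		return Math.PI * Radius * Radius;
	}
}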

View the list of posts on Architecture and Patterns here.

The Don’t-Repeat-Yourself (DRY) design principle in .NET Part 1

Introduction

The idea behind the Don’t-Repeat-Yourself (DRY) design principle is an easy one: a piece of logic should only be represented once in an application. In other words, avoiding the repetition of any part of a system is a desirable trait. Code that is common to at least two different parts of your system should be factored out into a single location so that both parts call upon it. In plain English all this means that you should stop doing copy+paste right away in your software. Your motto should be the following:

Repetition is the root of all software evil.

Repetition does not only refer to writing the same piece of logic twice in two different places. It also refers to repetition in your processes – testing, debugging, deployment etc. Repetition in logic is often solved by abstractions or some common service classes, whereas repetition in your processes is tackled by automation. A lot of tedious processes can be automated with Continuous Integration concepts and related automation software such as TeamCity. Unit testing can be automated with testing tools such as NUnit. You can read more on Test Driven Development and unit testing here.

In this short series on DRY I’ll concentrate on the ‘logic’ side of DRY. DRY is known by other names as well: Once and Only Once, and Duplication is Evil (DIE).

Examples

Magic strings

These are hard-coded strings that pop up at different places throughout your code: connection strings, formats, constants, like in the following code example:

class Program
{
	static void Main(string[] args)
	{
		DoSomething();
		DoSomethingAgain();
		DoSomethingMore();
		DoSomethingExtraordinary();
		Console.ReadLine();
	}

	private static void DoSomething()
	{
		string address = "Stockholm, Sweden";
		string format = "{0} is {1}, lives in {2}, age {3}";
		Console.WriteLine(format, "Nils", "a good friend", address, 30);
	}

	private static void DoSomethingAgain()
	{
		string address = "Stockholm, Sweden";
		string format = "{0} is {1}, lives in {2}, age {3}";
		Console.WriteLine(format, "Christian", "a neighbour", address, 54);
	}

	private static void DoSomethingMore()
	{
		string address = "Stockholm, Sweden";
		string format = "{0} is {1}, lives in {2}, age {3}";
		Console.WriteLine(format, "Eva", "my daughter", address, 4);
	}

	private static void DoSomethingExtraordinary()
	{
		string address = "Stockholm, Sweden";
		string format = "{0} is {1}, lives in {2}, age {3}";
		Console.WriteLine(format, "Lilly", "my daughter's best friend", address, 4);
	}
}

This is obviously a very simplistic example, but imagine that the methods are located in different sections or even different modules of your application. If you want to change the address you’ll need to find every hard-coded instance of it. Likewise, if you want to change the format you’ll need to update it in several different places. We can put these values in a single location, such as Constants.cs:

public class Constants
{
	public static readonly string Address = "Stockholm, Sweden";
	public static readonly string StandardFormat = "{0} is {1}, lives in {2}, age {3}";
}

If you have a database connection string then that can be put into the configuration file app.config or web.config.
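As a quick, hedged sketch – the “Default” connection string name below is only a placeholder – such a value can then be read through the ConfigurationManager class (this requires a reference to System.Configuration):

using System.Configuration;

public class ConnectionStrings
{
	// Reads the <connectionStrings> entry named "Default" from app.config/web.config
	public static string Default
	{
		get { return ConfigurationManager.ConnectionStrings["Default"].ConnectionString; }
	}
}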

The updated program looks as follows:

class Program
{
	static void Main(string[] args)
	{
		DoSomething();
		DoSomethingAgain();
		DoSomethingMore();
		DoSomethingExtraordinary();
		Console.ReadLine();
	}

	private static void DoSomething()
	{
		string address = Constants.Address;
		string format = Constants.StandardFormat;
		Console.WriteLine(format, "Nils", "a good friend", address, 30);
	}

	private static void DoSomethingAgain()
	{
		string address = Constants.Address;
		string format = Constants.StandardFormat;
		Console.WriteLine(format, "Christian", "a neighbour", address, 54);
	}

	private static void DoSomethingMore()
	{
		string address = Constants.Address;
		string format = Constants.StandardFormat;
		Console.WriteLine(format, "Eva", "my daughter", address, 4);
	}

	private static void DoSomethingExtraordinary()
	{
		string address = Constants.Address;
		string format = Constants.StandardFormat;
		Console.WriteLine(format, "Lilly", "my daughter's best friend", address, 4);
	}
}

This is a step in the right direction. If we change the constants in Constants.cs, the change will propagate through the application. However, we still repeat the following bit over and over again:

string address = Constants.Address;
string format = Constants.StandardFormat;

The VALUES of the constants are now stored in one place, but what if we move our constants to a different file? Or decide to read them from a file or a database? Then we’ll again need to revisit all these locations. We can move those variables to the class level and use them in our code as follows:

class Program
	{
		private static string address = Constants.Address;
		private static string format = Constants.StandardFormat;

		static void Main(string[] args)
		{
			DoSomething();
			DoSomethingAgain();
			DoSomethingMore();
			DoSomethingExtraordinary();
			Console.ReadLine();
		}

		private static void DoSomething()
		{
			Console.WriteLine(format, "Nils", "a good friend", address, 30);
		}

		private static void DoSomethingAgain()
		{
			Console.WriteLine(format, "Christian", "a neighbour", address, 54);
		}

		private static void DoSomethingMore()
		{
			Console.WriteLine(format, "Eva", "my daughter", address, 4);
		}

		private static void DoSomethingExtraordinary()
		{
			Console.WriteLine(format, "Lilly", "my daughter's best friend", address, 4);
		}
	}

We’ve got rid of the magic string repetition, but we can do better. Notice that each method does basically the same thing: write to the console. This is an example of duplicated logic. The data written to the console is very similar in each case, so we can factor it out into another method:

private static void WriteToConsole(string name, string description, int age)
{
	Console.WriteLine(format, name, description, address, age);
}

The updated Program class looks as follows:

class Program
	{
		private static string address = Constants.Address;
		private static string format = Constants.StandardFormat;

		static void Main(string[] args)
		{
			DoSomething();
			DoSomethingAgain();
			DoSomethingMore();
			DoSomethingExtraordinary();
			Console.ReadLine();
		}

		private static void DoSomething()
		{
			WriteToConsole("Nils", "a good friend", 30);
		}

		private static void DoSomethingAgain()
		{
			WriteToConsole("Christian", "a neighbour", 54);
		}

		private static void DoSomethingMore()
		{
			WriteToConsole("Eva", "my daughter", 4);
		}

		private static void DoSomethingExtraordinary()
		{
			WriteToConsole("Lilly", "my daughter's best friend", 4);
		}

		private static void WriteToConsole(string name, string description, int age)
		{
			Console.WriteLine(format, name, description, address, age);
		}
	}

Magic numbers

It’s not only magic strings that can cause trouble but magic numbers as well. Imagine that you have the following class in your application:

public class Employee
{
	public string Name { get; set; }
	public int Age { get; set; }
	public string Department { get; set; }
}

We’ll imitate a database lookup as follows:

private static IEnumerable<Employee> GetEmployees()
{
	return new List<Employee>()
	{
		new Employee(){Age = 30, Department="IT", Name="John"}
		, new Employee(){Age = 34, Department="Marketing", Name="Jane"}
		, new Employee(){Age = 28, Department="Security", Name="Karen"}
		, new Employee(){Age = 40, Department="Management", Name="Dave"}
	};
}

Notice the usage of the index 1 in the following method:

private static void DoMagicInteger()
{
	List<Employee> employees = GetEmployees().ToList();
	if (employees.Count > 0)
	{
		Console.WriteLine(string.Concat("Age: ", employees[1].Age, ", department: ", employees[1].Department
			, ", name: ", employees[1].Name));
	}
}

So we only want to output the properties of the second employee in the list, i.e. the one with index 1. One issue is conceptual: why are we only interested in that particular employee? What’s so special about him/her? This is not clear to anyone investigating the code. The second issue is that if we want to change the value of the index then we’ll need to do it in three places. If this particular index is important elsewhere as well then we’ll have to visit those places too and update the index.

We can solve both issues using the same simple techniques as in the previous example. Set a new constant in Constants.cs:

public class Constants
{
	public static readonly string Address = "Stockholm, Sweden";
	public static readonly string StandardFormat = "{0} is {1}, lives in {2}, age {3}";
	public static readonly int IndexOfMyFavouriteEmployee = 1;
}

Then introduce a new class level variable in Program.cs:

private static int indexOfMyFavouriteEmployee = Constants.IndexOfMyFavouriteEmployee;

The updated DoMagicInteger() method looks as follows:

private static void DoMagicInteger()
{
	List<Employee> employees = GetEmployees().ToList();
	if (employees.Count > 0)
	{
		Employee favouriteEmployee = employees[indexOfMyFavouriteEmployee];
		Console.WriteLine(string.Concat("Age: ", favouriteEmployee.Age, 
			", department: ", favouriteEmployee.Department
			, ", name: ", favouriteEmployee.Name));
	}
}

View the list of posts on Architecture and Patterns here.

A model .NET web service based on Domain Driven Design Part 10: tests and conclusions

Introduction

In the previous post we got as far as implementing all the Get, Post, Put and Delete methods. We also tested the Get methods, as their results could be viewed in the browser alone. In order to test the Post, Put and Delete methods we’ll need to do some more work.

Demo

There are various tools out there that can generate any type of HTTP calls for you where you can specify the JSON inputs, HTTP headers etc. However, we’re programmers, right? We don’t need no tools, we can write a simple one ourselves! Don’t worry, I only mean a GUI-less throw-away application that consists of a few lines of code, not a complete Fiddler.

Fire up Visual Studio and create a new Console application. Add a reference to the System.Net.Http library. Also, add a NuGet package reference to Json.NET:

Newtonsoft Json.NET in NuGet

System.Net.Http includes all objects necessary for creating Http messages. Json.NET will help us package the input parameters in the message body.

POST

Let’s start with testing the insertion method. Recall from the previous post that the Post method in CustomersController expects a CustomerPropertiesViewModel object:

public HttpResponseMessage Post(CustomerPropertiesViewModel insertCustomerViewModel)

We’ll make this easy for ourselves and create an identical CustomerPropertiesViewModel in the tester app so that the JSON translation will be as easy as possible. So insert a class called CustomerPropertiesViewModel into the tester app with the following properties:

public string Name { get; set; }
public string AddressLine1 { get; set; }
public string AddressLine2 { get; set; }
public string City { get; set; }
public string PostalCode { get; set; }

Insert the following method in order to test the addition of a new customer:

private static void RunPostOperation()
{
	HttpRequestMessage requestMessage = new HttpRequestMessage(HttpMethod.Post, _serviceUri);
	requestMessage.Headers.ExpectContinue = false;
	CustomerPropertiesViewModel newCustomer = new CustomerPropertiesViewModel()
	{
		AddressLine1 = "New address"
		, AddressLine2 = string.Empty
		, City = "Moscow"
		, Name = "Awesome customer"
		, PostalCode = "123987"
	};
	string jsonInputs = JsonConvert.SerializeObject(newCustomer);
	requestMessage.Content = new StringContent(jsonInputs, Encoding.UTF8, "application/json");
	HttpClient httpClient = new HttpClient();
	httpClient.Timeout = new TimeSpan(0, 10, 0);
	Task<HttpResponseMessage> httpRequest = httpClient.SendAsync(requestMessage,
			HttpCompletionOption.ResponseContentRead, CancellationToken.None);
	HttpResponseMessage httpResponse = httpRequest.Result;
	HttpStatusCode statusCode = httpResponse.StatusCode;
	HttpContent responseContent = httpResponse.Content;
	if (responseContent != null)
	{
		Task<String> stringContentsTask = responseContent.ReadAsStringAsync();
		String stringContents = stringContentsTask.Result;
		Console.WriteLine("Response from service: " + stringContents);
	}
	Console.ReadKey();
}

…where _serviceUri is a private Uri:

private static Uri _serviceUri = new Uri("http://localhost:9985/customers");

This is the URL that’s displayed in the browser when you start the web API project. The port number may of course differ from yours, so adjust it accordingly.

This is of course not a very optimal method as it carries out a lot of things, but that’s beside the point right now. We only need a simple tester, that’s all.

If you don’t know the HttpClient and the related objects in this method, then don’t worry, you’ve just learned something new. We set the web method to POST and assign the JSONified version of the CustomerPropertiesViewModel object to the string content. We also set the request timeout to 10 minutes so that you don’t get a timeout exception as you slowly step through the code in the web service in a bit. The content type is set to application/json so that the web service knows which media type formatter to use. We then send the request to the web service using the SendAsync method and wait for a reply.

Make sure to insert a call to this method from within Main.

Open CustomersController in the DDD skeleton project and set a breakpoint within the Post method here:

InsertCustomerResponse insertCustomerResponse = _customerService.InsertCustomer(new InsertCustomerRequest() { CustomerProperties = insertCustomerViewModel });

Start the skeleton project. Then run the tester. If everything went well then execution should stop at the breakpoint. Inspect the contents of the incoming insertCustomerViewModel parameter. You’ll see that the parameter properties have been correctly assigned by the JSON formatter.

From this point on I encourage you to step through the web service call with F11. You’ll see how the domain object is created and validated – including the Address property, how it is converted into the corresponding database type, how the IUnitOfWork implementation – InMemoryUnitOfWork – registers the insertion and how it is persisted. You’ll also see that all abstract dependencies have been correctly instantiated by StructureMap, we haven’t got any exceptions along the way which may point to some null dependency problem.

When the web service call returns, you should see that it returned an Exception == null property to the caller, i.e. the tester app. Now refresh the browser and you’ll see the new customer just inserted into memory.

Let’s try something else: assign an empty string to the Name property of the CustomerPropertiesViewModel object in the tester app. We know that a customer must have a name so we’re expecting some trouble. Run the tester app and you should see an exception being thrown at this code line:

throw new Exception(brokenRulesBuilder.ToString());

…within CustomerService.cs. This is because the BrokenRules list of the Customer domain contains one broken rule. Let the code execute and you’ll see in the tester app that we received the correct exception message:

Validation exception from web service

Now assign an empty string to the City property as we know that an Address must have a city. You’ll see a similar response:

Address must have a city validation exception

PUT

We’ll follow the same analogy as above in order to update a customer. Insert the following object into the tester app:

public class UpdateCustomerViewModel : CustomerPropertiesViewModel
{
	public int Id { get; set; }
}

Insert the following method into Program.cs:

private static void RunPutOperation()
{
	HttpRequestMessage requestMessage = new HttpRequestMessage(HttpMethod.Put, _serviceUri);
	requestMessage.Headers.ExpectContinue = false;
	UpdateCustomerViewModel updatedCustomer = new UpdateCustomerViewModel()
	{
		Id = 2
		, AddressLine1 = "Updated address line 1"
		, AddressLine2 = string.Empty
		, City = "Updated city"
		, Name = "Updated customer name"
		, PostalCode = "0988765"
	};

	string jsonInputs = JsonConvert.SerializeObject(updatedCustomer);
	requestMessage.Content = new StringContent(jsonInputs, Encoding.UTF8, "application/json");
	HttpClient httpClient = new HttpClient();
	httpClient.Timeout = new TimeSpan(0, 10, 0);
	Task<HttpResponseMessage> httpRequest = httpClient.SendAsync(requestMessage,
			HttpCompletionOption.ResponseContentRead, CancellationToken.None);
	HttpResponseMessage httpResponse = httpRequest.Result;
	HttpStatusCode statusCode = httpResponse.StatusCode;
	HttpContent responseContent = httpResponse.Content;
	if (responseContent != null)
	{
		Task<String> stringContentsTask = responseContent.ReadAsStringAsync();
		String stringContents = stringContentsTask.Result;
		Console.WriteLine("Response from service: " + stringContents);
	}
	Console.ReadKey();
}

This is almost identical to the Post operation except that we set the HTTP verb to PUT and we send a JSONified UpdateCustomerViewModel object to the service. You’ll notice that we want to update the customer with id = 2. Comment out the call to RunPostOperation() in Main and add a call to RunPutOperation() instead. Set a breakpoint within the Put method in CustomersController of the skeleton web service. Step through the code execution with F11 as before and follow how the customer is found and updated. Refresh the browser to see the updated values of the Customer with id = 2. Run the same test as above: set the customer name to an empty string and try to update the resource. The request should fail in the validation phase.

DELETE

The delete method is a lot simpler as we only need to send an id to the Delete method of the web service:

private static void RunDeleteOperation()
{
	HttpRequestMessage requestMessage = new HttpRequestMessage(HttpMethod.Delete, string.Concat(_serviceUri, "/3"));
	requestMessage.Headers.ExpectContinue = false;
	HttpClient httpClient = new HttpClient();
	httpClient.Timeout = new TimeSpan(0, 10, 0);
	Task<HttpResponseMessage> httpRequest = httpClient.SendAsync(requestMessage,
			HttpCompletionOption.ResponseContentRead, CancellationToken.None);
	HttpResponseMessage httpResponse = httpRequest.Result;
	HttpStatusCode statusCode = httpResponse.StatusCode;
	HttpContent responseContent = httpResponse.Content;
	if (responseContent != null)
	{
		Task<String> stringContentsTask = responseContent.ReadAsStringAsync();
		String stringContents = stringContentsTask.Result;
		Console.WriteLine("Response from service: " + stringContents);
	}
	Console.ReadKey();
}

You’ll see that we set the HTTP method to DELETE and extended the service Uri to show that we’re talking about customer #3.

Set a breakpoint within the Delete method in CustomersController.cs and as before step through the code execution line by line to see how the deletion is registered and persisted by IUnitOfWork.

Analysis and conclusions

That actually completes the first version of the skeleton project. Let’s see if we’re doing any better than the tightly coupled solution of the first installment of this series.

Dependency graph

If you recall from the first part of this series then the greatest criticism against the technology-driven layered application was that all layers depended on the repository layer and that EF-related objects permeated the rest of the application.

We’ll now look at an updated dependency graph. Before we do that, there’s a special group of dependencies in the skeleton app that slightly distorts the picture: the web layer references all other layers for the sake of StructureMap. StructureMap needs some hints as to where to look for implementations, so we had to set a library reference to all other layers. This is not part of any business logic, so a true dependency graph should, I think, not consider this set of links. If you don’t like this coupling then it’s perfectly reasonable to create a separate, very thin layer for StructureMap and let that layer reference the infrastructure, repository, domain and service layers.

If we start from the top, we see that the web layer talks to the service layer through the ICustomerService interface. The ICustomerService interface uses Request-Response objects to communicate with the outside world. Any implementation of ICustomerService will communicate through those objects, so their use within CustomersController is acceptable as well. The service doesn’t hand out Customer domain objects directly. Instead it returns CustomerViewModels wrapped within the corresponding GetCustomersResponse object. Therefore I think we can conclude that the web layer only depends on the service layer and that this coupling is fairly loose.

The application service layer has a reference to the Infrastructure layer – through the IUnitOfWork interface – and the Domain layer. In a full-blown project there will be more links to the Infrastructure layer – logging, caching, authentication etc. – but as long as you hide those concerns behind abstractions you’ll be fine. The Customer repository is only propagated in the form of an interface – ICustomerRepository. Otherwise the domain objects are allowed to bubble up to the Service layer as they are the central elements of the application.

The Domain layer has a dependency on the Infrastructure layer through abstractions such as EntityBase, IAggregateRoot and IRepository. That’s all fine and good. The only coupling that’s tighter than these is the BusinessRule object where we construct a new BusinessRule in the CustomerBusinessRule class. In retrospect it may have been a better idea to have an abstract BusinessRule class in the infrastructure layer with concrete implementations of it in the domain layer.

The repository layer has a reference to the Infrastructure layer – again through abstractions such as IAggregateRoot and IUnitOfWorkRepository – and the Domain layer. Notice that we changed the direction of the dependency compared to the how-not-to-do mini-project of the first post in this series: the domain layer does not depend on the repository but the repository depends on the domain layer.

Here’s the updated dependency graph:

DDD improved dependency graph

I think the most significant change compared to where we started is that no single layer depends, directly or indirectly, on the concrete repository layer. You can test this by unloading that project from the solution – right-click, select Unload Project. There will be a broken reference in the Web layer that only exists for the sake of StructureMap, but otherwise the solution survives this “amputation”. We have successfully hidden the most technology-driven layer behind an abstraction that can be replaced with ease. You need to implement the IUnitOfWork and IUnitOfWorkRepository interfaces and you should be fine. You can then instruct StructureMap to use the new implementation instead, as sketched below. You can even switch between two or more different implementations to test how the different technologies work before you settle on a specific one in your project. Of course writing those implementations may not be a trivial task, but switching from one technology to another certainly will be.
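Switching the implementation is then a one-line change in IoC.cs. A hedged sketch – EntityFrameworkUnitOfWork is a hypothetical class name used purely for illustration:

// instead of x.For<IUnitOfWork>().Use<InMemoryUnitOfWork>();
x.For<IUnitOfWork>().Use<EntityFrameworkUnitOfWork>();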

Another important change is that the domain layer is now truly the central one in the solution. The services and data access layers directly reference it. The UI layer depends on it indirectly through the service layer.

That’s all folks. I hope you have learned new things and can use this skeleton solution in some way in your own project.

The project has been extended. Read the first extension here.

View the list of posts on Architecture and Patterns here.

A model .NET web service based on Domain Driven Design Part 9: the Web layer

Introduction

We’re now ready to build the ultimate consumer of the application: the web layer. As we said before, this layer can be any type of presentation layer: MVC, a web service interface, WPF, a console app, you name it. The backend design that we’ve built up so far should be flexible enough to support either a relatively simple switch of UI type or the addition of new ones. You may want to have multiple entry points into your business app, so it’s good if they can rely on the same foundation. Otherwise you may need to build different apps just to support different presentation methods, which is far from optimal.

Here we’ll build a web service layer powered by Web API. If you don’t know what it is you can read about it on its official homepage. Here comes a summary:

In short, Web API is a technology by Microsoft for building HTTP-based web services. Web API uses the standard RESTful way of building a web service with no SOAP overhead; only plain old HTTP messages are exchanged between the client and the server. The client sends a normal HTTP request to the web service at some URI and receives an HTTP response in return.

The technology builds heavily on MVC: it has models and controllers just like a normal MVC web site. It lacks any views of course as a web service provides responses in JSON, XML, plain text etc., not HTML. There are some other important differences:

  • The actions in the controllers do not return ActionResults: they can return any string-based values and HttpResponseMessage objects
  • The controllers derive from the ApiController class, not the Controller class as in standard MVC

The routing is somewhat different:

  • In standard MS MVC the default routing may look as follows: controller/action/parameters
  • In Web API the ‘action’ is omitted by default: Actions will be routed based on the HTTP verb of the incoming HTTP message: GET, POST, PUT, DELETE, HEAD, OPTIONS
  • The action method signatures follow this convention: Get(), Get(int id), Post(), Put(), Delete(int id)
  • As long as you keep to this basic convention the routing will work without changing the routing in WebApiConfig.cs in the App_Start folder

Routing example: say that the client wants to get data on the customer with id 23. They will send a GET request to our web service with the following URI: http://www.api.com/customers/23. The Web API routing engine will translate this into a Get(int id) method within the controller called CustomersController. If however they want to delete this customer, they will send a DELETE request to the same URI and the routing engine will try to find a Delete(int id) method in CustomersController.

In other words: each supported HTTP verb has a corresponding method in the relevant controller. If a resource does not support a specific verb, e.g. a Customer cannot be deleted, then just omit the Delete(int id) method in CustomersController and Web API will return an HTTP error message saying that there’s no suitable method.

The basic convention allows some freedom in naming your action methods. Get, Post etc. can be named Get[resource], Post[resource], e.g. GetCustomer, PostCustomer, DeleteCustomer, and the routing will still work. If for any reason you don’t like the default naming conventions you can still use the standard HttpGet, HttpPost type of attributes known from MS MVC, as in the sketch below.
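As a hedged illustration – the ProductsController and FetchProduct names are hypothetical, not part of the skeleton project – a non-conventional action name can be mapped to a verb with such an attribute:

public class ProductsController : ApiController
{
	// Handles GET products/{id} despite the unconventional method name
	[HttpGet]
	public string FetchProduct(int id)
	{
		return "product " + id;
	}
}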

I won’t concentrate on the details of Web API in this post. If there’s something you don’t understand along the way then make sure to check out the link provided above.

We’ll also see how the different dependencies can be injected into the services, repositories and other objects that depend on abstractions. So far we have diligently made room for injecting dependencies according to the letter ‘D‘ in SOLID, like here:

public CustomerService(ICustomerRepository customerRepository, IUnitOfWork unitOfWork)

However, at some point we’ll need to inject these dependencies, right? We could follow poor man’s DI by constructing a new CustomerRepository and a new InMemoryUnitOfWork object, as they implement the necessary ICustomerRepository and IUnitOfWork interfaces. However, modern applications use one of the many available Inversion-of-Control containers to take care of these plumbing tasks. In our case we’ll use StructureMap, which is quite common in .NET projects and works very well with them. IoC containers can be difficult to grasp at first as they seem to do a lot of magic, but don’t worry too much about them. StructureMap can do a lot for you without your having to dig deep into the details of how it works, and it’s easy to install and get started with using NuGet.

The web layer

Add a new Web API project by taking the following steps.

1. Add new project

2. Select Web/MVC 4 web application:

Add new MVC project

Call it DDDSkeletonNET.Portal.WebService.

3. Select Web API in the Project template:

Web API in project template

The result is actually a mix between an MVC and a Web API project. Take a look into the Controllers folder. It includes a HomeController which derives from the MVC Controller class and a ValuesController which derives from the Web API ApiController. The project also includes images, views and routing related to MVC. The idea is that MVC views can also consume Web API controllers. Ajax calls can also be directed towards Web API controllers. However, our goal is to have a pure web service layer so let’s clean this up a little:

  • Delete the Content folder
  • Delete the Images folder
  • Delete the Views folder
  • Delete the Scripts folder
  • Delete both controllers from the Controllers folder
  • Delete favicon.ico
  • Delete BundleConfig.cs from the App_Start folder
  • Delete RouteConfig.cs from the App_Start folder
  • Delete the Models folder
  • Delete the Areas folder
  • In WebApiConfig.Register locate the routeTemplate parameter. It says “api/…” by default. Remove the “api/” bit so that it says {controller}/{id}
  • Locate Global.asax.cs. It is trying to call RouteConfig and BundleConfig – remove those calls

The WebService layer should be very slim at this point with only a handful of folders and files. Right-click the Controllers folder and add a new empty API controller called CustomersController:

Add api controller to web api layer

We’ll need to have an ICustomerService object in the CustomersController so add a reference to the ApplicationServices layer.

Add the following private backing field and constructor to CustomersController:

private readonly ICustomerService _customerService;

public CustomersController(ICustomerService customerService)
{
	if (customerService == null) throw new ArgumentNullException("CustomerService in CustomersController");
	_customerService = customerService;
}

We’ll want to return HTTP messages only. HTTP responses are represented by the HttpResponseMessage object. As all our controller methods will return a HttpResponseMessage, we can create an extension method to build these messages. Insert a folder called Helpers in the web layer. Add the following extension method to it:

public static class HttpResponseBuilder
{
	public static HttpResponseMessage BuildResponse(this HttpRequestMessage requestMessage, ServiceResponseBase baseResponse)
	{
		HttpStatusCode statusCode = HttpStatusCode.OK;
		if (baseResponse.Exception != null)
		{
			statusCode = baseResponse.Exception.ConvertToHttpStatusCode();
			HttpResponseMessage message = new HttpResponseMessage(statusCode);
			message.Content = new StringContent(baseResponse.Exception.Message);
			throw new HttpResponseException(message);
		}
		return requestMessage.CreateResponse<ServiceResponseBase>(statusCode, baseResponse);
	}
}

We’re extending the HttpRequestMessage object, which represents the HTTP request coming in to our web service. We build a response based on the Response object we received from the service layer. We assume that the HTTP status is OK (200), but if there’s been an exception then we adjust that status and throw an HttpResponseException. Make sure to set the namespace to DDDSkeletonNET.Portal so that the extension method is visible anywhere in the project without having to add using statements.

ConvertToHttpStatusCode() is also an extension method. Add another class called ExceptionDictionary to the Helpers folder:

public static class ExceptionDictionary
{
	public static HttpStatusCode ConvertToHttpStatusCode(this Exception exception)
	{
		Dictionary<Type, HttpStatusCode> dict = GetExceptionDictionary();
		if (dict.ContainsKey(exception.GetType()))
		{
			return dict[exception.GetType()];
		}
		return dict[typeof(Exception)];
	}

	private static Dictionary<Type, HttpStatusCode> GetExceptionDictionary()
	{
		Dictionary<Type, HttpStatusCode> dict = new Dictionary<Type, HttpStatusCode>();
		dict[typeof(ResourceNotFoundException)] = HttpStatusCode.NotFound;
		dict[typeof(Exception)] = HttpStatusCode.InternalServerError;
		return dict;
	}
}

Here we maintain a dictionary of Exception/HttpStatusCode pairs. It would be nicer of course to read this directly from the Exception object possibly through an Adapter but this solution is OK for now.

Let’s implement the get-all-customers method in CustomersController. So we’ll need a Get method without any parameters that corresponds to the /customers route. That should be as easy as the following:

public HttpResponseMessage Get()
{
	ServiceResponseBase resp = _customerService.GetAllCustomers();
	return Request.BuildResponse(resp);
}

We ask the service to retrieve all customers and convert that response into a HttpResponseMessage object.

We cannot use this controller yet as ICustomerService is null; there’s no concrete type behind it within the controller. This is where StructureMap enters the scene. Open the Manage NuGet Packages window and install the following package:

Install StructureMap IoC in web api layer

The installer adds a couple of new files to the web service layer:

  • 3 files in the DependencyResolution folder
  • StructuremapMvc.cs in the App_Start folder

The only file we’ll consider in any detail is IoC.cs in the DependencyResolution folder. Don’t worry about the rest, they are not important to our main discussion. Here’s a short summary:

StructuremapMvc was auto-generated by the StructureMap NuGet package and it can safely be ignored, it simply works.

DependencyResolution folder: IoC.cs is important to understand; the other auto-generated classes can be ignored. In IoC.cs we declare which concrete types we want StructureMap to inject in place of the abstractions. If you are not familiar with IoC containers then you may wonder how ICustomerService is injected into CustomersController and how ICustomerRepository is injected into CustomerService. This happens automagically through StructureMap, and IoC.cs is where we tell it where to look for concrete types and, in special cases, explicitly which concrete type to take.

StructureMap follows a simple built-in naming convention: if it encounters an interface starting with an ‘I’ it will look for a concrete type with the same name without the ‘I’ in front. Example: if it sees that an ICustomerService interface is needed then it will try to fetch a CustomerService object. This is expressed by the scan.WithDefaultConventions() call. It is easy to register new naming conventions for StructureMap if necessary – let me know in the comment section if you need any code sample.

We also need to tell StructureMap where to look for concrete types. It won’t automatically find the implementations of our abstractions; we need to give it some hints. We declare this in the calls to scan.AssemblyContainingType<T>(). Example: scan.AssemblyContainingType<CustomerRepository>() means that StructureMap should go and look in the assembly which contains the CustomerRepository type. Note that this does not mean that CustomerRepository must be injected at all times. It simply says that StructureMap will look in that assembly for concrete implementations of an abstraction. I could have picked any type from that assembly, it doesn’t matter. So these calls tell StructureMap to look in each assembly that belongs to the solution.

There are cases where the standard naming convention is not enough. Then you can explicitly tell StructureMap which concrete type to inject. Example: x.For<IAbstraction>().Use<Implementation>(); means that whenever StructureMap sees a dependency on IAbstraction it should inject a new Implementation instance.

ObjectFactory.AssertConfigurationIsValid() will make sure that an exception is thrown during project start-up if StructureMap sees a dependency for which it cannot find any suitable implementation.

Update the Initialize() method in IoC.cs to the following:

public static IContainer Initialize()
{
	ObjectFactory.Initialize(x =>
	{
        	x.Scan(scan =>
		{
	        	scan.TheCallingAssembly();
			scan.AssemblyContainingType<ICustomerRepository>();
		        scan.AssemblyContainingType<CustomerRepository>();
			scan.AssemblyContainingType<ICustomerService>();
			scan.AssemblyContainingType<BusinessRule>();
			scan.WithDefaultConventions();
		});
		x.For<IUnitOfWork>().Use<InMemoryUnitOfWork>();
		x.For<IObjectContextFactory>().Use<LazySingletonObjectContextFactory>();
	});
	ObjectFactory.AssertConfigurationIsValid();
	return ObjectFactory.Container;
}

You’ll need to reference all the other layers from the Web layer for this to work. We’re telling StructureMap to scan all the other assemblies by naming one type contained in each of them – again, I could have picked ANY type from the other projects, so don’t get hung up on questions like “Why did he choose BusinessRule?”. These calls will make sure that the correct implementations are found based on the default naming convention mentioned above. There are two cases where this convention is not enough: IUnitOfWork and IObjectContextFactory. Here we use the For and Use methods to declare exactly which implementation we need. Finally we want to assert that all implementations have been found. You can test it for yourself: comment out the line registering IUnitOfWork, start the application – make sure to set the web layer as the startup project – and you should get a long exception message; here’s the gist of it:

StructureMap.Exceptions.StructureMapConfigurationException was unhandled by user code
No Default Instance defined for PluginFamily DDDSkeletonNET.Infrastructure.Common.UnitOfWork.IUnitOfWork

StructureMap couldn’t resolve the IUnitOfWork dependency so it threw an error.

Open the properties window of the web project and specify the route to customers:

Specify starting route in properties window

Set a breakpoint at this line in CustomersController:

ServiceResponseBase resp = _customerService.GetAllCustomers();

…and press F5. Execution should stop at the break point. Hover over _customerService with the mouse to check the status of the dependency. You’ll see it is not null, so StructureMap has correctly found and constructed a CustomerService object for us. Step through the code with F11 to see how it is all connected. You’ll see that all dependencies have been resolved correctly.

However, at the end of the loop, when the 3 customers that were retrieved from memory should be presented, we get the following exception:

The ‘ObjectContent`1’ type failed to serialize the response body for content type ‘application/xml; charset=utf-8’.

Open WebApiConfig and add the following lines of code to the Register method:

var json = config.Formatters.JsonFormatter;
json.SerializerSettings.PreserveReferencesHandling = Newtonsoft.Json.PreserveReferencesHandling.Objects;
config.Formatters.Remove(config.Formatters.XmlFormatter);

This will make sure that we return our responses in JSON format.

Re-run the app and you should see some JSON on your browser:

Json response from get all customers

Yaaay, after much hard work we’re getting somewhere at last! How can we retrieve a customer by id? Add the following overloaded Get method to the Customers controller:

public HttpResponseMessage Get(int id)
{
	ServiceResponseBase resp = _customerService.GetCustomer(new GetCustomerRequest(id));
	return Request.BuildResponse(resp);
}

Run the application and enter the following route in the URL window: customers/1. You should see that the customer with id 1 is returned:

Get one customer JSON response

Now try this with an ID that you know does not exist, such as customers/5. An exception will be thrown in the application. Let the execution continue and you should see the following exception message in your web browser:

Resource not found JSON

This is the message we set in the code if you recall.

What if we want to format the data slightly differently? It’s good that we have a customer view model and request-response objects where we are free to change what we want without modifying the corresponding domain object. Open the application services layer and add a reference to the System.Runtime.Serialization library. Modify the CustomerViewModel object as follows:

[DataContract]
public class CustomerViewModel
{
	[DataMember(Name="Customer name")]
	public string Name { get; set; }
	[DataMember(Name="Address")]
	public string AddressLine1 { get; set; }
	public string AddressLine2 { get; set; }
	[DataMember(Name="City")]
	public string City { get; set; }
	[DataMember(Name="Postal code")]
	public string PostalCode { get; set; }
	[DataMember(Name="Customer id")]
	public int Id { get; set; }
}

Re-run the application and navigate to customers/1. You should see the updated property names:

Data member and data contract attribute

You can decorate the Response objects as well with these attributes.

This was a little change in the property names only but feel free to add extra formats to the view model, it’s perfectly fine.

We’re missing the insert, update and delete methods. Let’s implement them here and we’ll test them in the next post.

As far as I’ve seen there’s a bit of confusion over how the web methods PUT and POST are supposed to be used in web requests. DELETE is clear, we want to delete a resource. GET is also straightforward. However, PUT and POST are still heavily debated. This post is not the time and place to decide once and for all what their roles are, so I’ll take the following approach:

  • POST: insert a resource
  • PUT: update a resource

Here come the implementations:

public HttpResponseMessage Post(CustomerPropertiesViewModel insertCustomerViewModel)
{
	InsertCustomerResponse insertCustomerResponse = _customerService.InsertCustomer(new InsertCustomerRequest() { CustomerProperties = insertCustomerViewModel });
	return Request.BuildResponse(insertCustomerResponse);
}

public HttpResponseMessage Put(UpdateCustomerViewModel updateCustomerViewModel)
{
	UpdateCustomerRequest req =
		new UpdateCustomerRequest(updateCustomerViewModel.Id)
		{
			CustomerProperties = new CustomerPropertiesViewModel()
			{
				AddressLine1 = updateCustomerViewModel.AddressLine1
				,AddressLine2 = updateCustomerViewModel.AddressLine2
				,City = updateCustomerViewModel.City
				,Name = updateCustomerViewModel.Name
				,PostalCode = updateCustomerViewModel.PostalCode
			}
		};
	UpdateCustomerResponse updateCustomerResponse = _customerService.UpdateCustomer(req);
	return Request.BuildResponse(updateCustomerResponse);
}

public HttpResponseMessage Delete(int id)
{
	DeleteCustomerResponse deleteCustomerResponse = _customerService.DeleteCustomer(new DeleteCustomerRequest(id));
	return Request.BuildResponse(deleteCustomerResponse);
}

…where UpdateCustomerViewModel derives from CustomerPropertiesViewModel:

public class UpdateCustomerViewModel : CustomerPropertiesViewModel
{
	public int Id { get; set; }
}
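
If you'd like to try one of these methods right away, here's a quick sketch of a console client that sends a PUT request with JSON in the body. It assumes the Microsoft.AspNet.WebApi.Client NuGet package for the PutAsJsonAsync extension method; the base address and property values are made up, so substitute the port of your own Web API project:

class TestClient
{
	static void Main(string[] args)
	{
		using (HttpClient client = new HttpClient())
		{
			//substitute the port IIS Express assigned to your Web API project
			client.BaseAddress = new Uri("http://localhost:12345/");
			UpdateCustomerViewModel viewModel = new UpdateCustomerViewModel()
			{
				Id = 1
				,Name = "GreatCustomer Ltd"
				,AddressLine1 = "Main street"
				,AddressLine2 = string.Empty
				,City = "Birmingham"
				,PostalCode = "B1 1AA"
			};
			//PutAsJsonAsync serialises the view model to JSON and sends it in the request body
			HttpResponseMessage response = client.PutAsJsonAsync("customers", viewModel).Result;
			Console.WriteLine(response.StatusCode);
			Console.ReadLine();
		}
	}
}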

We'll test these methods properly in the next post, where we'll also draw the conclusions of what we have achieved to finish up the series.

View the list of posts on Architecture and Patterns here.

A model .NET web service based on Domain Driven Design Part 8: the concrete Service

We’ll continue where we left off in the previous post. It’s time to implement the first concrete service in the skeleton application: the CustomerService.

Open the project we’ve been working on in this series. Locate the ApplicationServices layer and add a new folder called Implementations. Add a new class called CustomerService which implements the ICustomerService interface we inserted in the previous post. The initial skeleton will look like this:

public class CustomerService : ICustomerService
{
	public GetCustomerResponse GetCustomer(GetCustomerRequest getCustomerRequest)
	{
		throw new NotImplementedException();
	}

	public GetCustomersResponse GetAllCustomers()
	{
		throw new NotImplementedException();
	}

	public InsertCustomerResponse InsertCustomer(InsertCustomerRequest insertCustomerRequest)
	{
		throw new NotImplementedException();
	}

	public UpdateCustomerResponse UpdateCustomer(UpdateCustomerRequest updateCustomerRequest)
	{
		throw new NotImplementedException();
	}

	public DeleteCustomerResponse DeleteCustomer(DeleteCustomerRequest deleteCustomerRequest)
	{
		throw new NotImplementedException();
	}
}

We know that the service will need some repository to retrieve the requested records. Which repository is it? The abstract one of course: ICustomerRepository. It represents all operations that the consumer is allowed to do in the customer repository. The service layer doesn’t care about the exact implementation of this interface.

We’ll also need a reference to the unit of work which will maintain and persist the changes we make. Again, we’ll take the abstract IUnitOfWork object.

These abstractions must be injected into the customer service class through its constructor. You can read about constructor injection and the other types of dependency injection here.

Add the following backing fields and the constructor to CustomerService.cs:

private readonly ICustomerRepository _customerRepository;
private readonly IUnitOfWork _unitOfWork;

public CustomerService(ICustomerRepository customerRepository, IUnitOfWork unitOfWork)
{
	if (customerRepository == null) throw new ArgumentNullException("customerRepository");
	if (unitOfWork == null) throw new ArgumentNullException("unitOfWork");
	_customerRepository = customerRepository;
	_unitOfWork = unitOfWork;
}

Let’s implement the GetCustomer method first:

public GetCustomerResponse GetCustomer(GetCustomerRequest getCustomerRequest)
{
	GetCustomerResponse getCustomerResponse = new GetCustomerResponse();
	Customer customer = null;
	try
	{
		customer = _customerRepository.FindBy(getCustomerRequest.Id);
		if (customer == null)
		{
			getCustomerResponse.Exception = GetStandardCustomerNotFoundException();
		}
		else
		{
			getCustomerResponse.Customer = customer.ConvertToViewModel();
		}
	}
	catch (Exception ex)
	{
		getCustomerResponse.Exception = ex;
	}
	return getCustomerResponse;
}

…where GetStandardCustomerNotFoundException() looks like this:

private ResourceNotFoundException GetStandardCustomerNotFoundException()
{
	return new ResourceNotFoundException("The requested customer was not found.");
}

…where ResourceNotFoundException looks like the following:

public class ResourceNotFoundException : Exception
{
	public ResourceNotFoundException(string message)
		: base(message)
	{}

	public ResourceNotFoundException()
		: base("The requested resource was not found.")
	{}
}

There's nothing too complicated in the GetCustomer method I hope. Note that we use the extension method ConvertToViewModel(), which we implemented in the previous post, to return a customer view model. We call upon the FindBy method of the repository to locate the resource and save any exception thrown along the way.
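
As a reminder, ConvertToViewModel() and its collection counterpart ConvertToViewModels(), which we'll use in GetAllCustomers below, were implemented as extension methods in the previous post. A sketch of the idea follows; the class name is just a placeholder and the real implementation may differ slightly:

public static class CustomerConverters
{
	public static CustomerViewModel ConvertToViewModel(this Customer customer)
	{
		return new CustomerViewModel()
		{
			Id = customer.Id
			,Name = customer.Name
			,AddressLine1 = customer.CustomerAddress.AddressLine1
			,AddressLine2 = customer.CustomerAddress.AddressLine2
			,City = customer.CustomerAddress.City
			,PostalCode = customer.CustomerAddress.PostalCode
		};
	}

	public static IEnumerable<CustomerViewModel> ConvertToViewModels(this IEnumerable<Customer> customers)
	{
		return customers.Select(customer => customer.ConvertToViewModel()).ToList();
	}
}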

GetAllCustomers is equally simple:

public GetCustomersResponse GetAllCustomers()
{
	GetCustomersResponse getCustomersResponse = new GetCustomersResponse();
	IEnumerable<Customer> allCustomers = null;

	try
	{
		allCustomers = _customerRepository.FindAll();
		getCustomersResponse.Customers = allCustomers.ConvertToViewModels();
	}
	catch (Exception ex)
	{
		getCustomersResponse.Exception = ex;
	}	
	return getCustomersResponse;
}

In the InsertCustomer method we create a new Customer domain object, validate it, call the repository to insert the object and finally call the unit of work to commit the changes:

public InsertCustomerResponse InsertCustomer(InsertCustomerRequest insertCustomerRequest)
{
	Customer newCustomer = AssignAvailablePropertiesToDomain(insertCustomerRequest.CustomerProperties);
	ThrowExceptionIfCustomerIsInvalid(newCustomer);
	try
	{
		_customerRepository.Insert(newCustomer);				
		_unitOfWork.Commit();
		return new InsertCustomerResponse();
	}
	catch (Exception ex)
	{
		return new InsertCustomerResponse() { Exception = ex };
	}
}

…where AssignAvailablePropertiesToDomain looks like this:

private Customer AssignAvailablePropertiesToDomain(CustomerPropertiesViewModel customerProperties)
{
	Customer customer = new Customer();
	customer.Name = customerProperties.Name;
	Address address = new Address();
	address.AddressLine1 = customerProperties.AddressLine1;
	address.AddressLine2 = customerProperties.AddressLine2;
	address.City = customerProperties.City;
	address.PostalCode = customerProperties.PostalCode;
	customer.CustomerAddress = address;
	return customer;
}

So we simply dress up a new Customer domain object based on the properties of the incoming CustomerPropertiesViewModel object. In the ThrowExceptionIfCustomerIsInvalid method we validate the Customer domain:

private void ThrowExceptionIfCustomerIsInvalid(Customer newCustomer)
{
	IEnumerable<BusinessRule> brokenRules = newCustomer.GetBrokenRules();
	if (brokenRules.Count() > 0)
	{
		StringBuilder brokenRulesBuilder = new StringBuilder();
		brokenRulesBuilder.AppendLine("There were problems saving the LoadtestPortalCustomer object:");
		foreach (BusinessRule businessRule in brokenRules)
		{
			brokenRulesBuilder.AppendLine(businessRule.RuleDescription);
		}

		throw new Exception(brokenRulesBuilder.ToString());
	}
}

Revisit the post on EntityBase and the Domain layer if you’ve forgotten what the BusinessRule object and the GetBrokenRules() method are about.

In the UpdateCustomer method we first check if the requested Customer object exists. Then we change its properties based on the incoming UpdateCustomerRequest object. The process after that is the same as in the case of InsertCustomer:

public UpdateCustomerResponse UpdateCustomer(UpdateCustomerRequest updateCustomerRequest)
{
	try
	{
		Customer existingCustomer = _customerRepository.FindBy(updateCustomerRequest.Id);
		if (existingCustomer != null)
		{
			Customer assignableProperties = AssignAvailablePropertiesToDomain(updateCustomerRequest.CustomerProperties);
			existingCustomer.CustomerAddress = assignableProperties.CustomerAddress;
			existingCustomer.Name = assignableProperties.Name;
			ThrowExceptionIfCustomerIsInvalid(existingCustomer);
			_customerRepository.Update(existingCustomer);
			_unitOfWork.Commit();
			return new UpdateCustomerResponse();
		}
		else
		{
			return new UpdateCustomerResponse() { Exception = GetStandardCustomerNotFoundException() };
		}
	}
	catch (Exception ex)
	{
		return new UpdateCustomerResponse() { Exception = ex };
	}
}

In the DeleteCustomer method we again first retrieve the object to see if it exists:

public DeleteCustomerResponse DeleteCustomer(DeleteCustomerRequest deleteCustomerRequest)
{
	try
	{
		Customer customer = _customerRepository.FindBy(deleteCustomerRequest.Id);
		if (customer != null)
		{
			_customerRepository.Delete(customer);
			_unitOfWork.Commit();
			return new DeleteCustomerResponse();
		}
		else
		{
			return new DeleteCustomerResponse() { Exception = GetStandardCustomerNotFoundException() };
		}
	}
	catch (Exception ex)
	{
		return new DeleteCustomerResponse() { Exception = ex };
	}
}

That should be it really; this completes the implementation of the CustomerService class.

In the next post we’ll start building the ultimate consumer of the application: the web layer which in this case will be a Web API web service. However, it could equally be a Console app, a WPF desktop app, a Silverlight app or an MVC web app, etc. It’s up to you what type of interface you build upon the backend skeleton.

View the list of posts on Architecture and Patterns here.

A model .NET web service based on Domain Driven Design Part 6: the concrete Repository continued

Introduction

In the previous post we laid the foundation for the concrete domain-specific repositories. We implemented the IUnitOfWork and IUnitOfWorkRepository interfaces and created an abstract Repository class. It’s time to see how we can implement the Customer repository. We’ll continue where we left off in the previous post.

The concrete Customer repository

Add a new folder called Repositories in the Repository.Memory data access layer. Add a new class called CustomerRepository with the following stub:

public class CustomerRepository : Repository<Customer, int, DatabaseCustomer>, ICustomerRepository
{
	public CustomerRepository(IUnitOfWork unitOfWork, IObjectContextFactory objectContextFactory) : base(unitOfWork, objectContextFactory)
	{}

	public override Customer FindBy(int id)
	{
		throw new NotImplementedException();
	}

	public override DatabaseCustomer ConvertToDatabaseType(Customer domainType)
	{
		throw new NotImplementedException();
	}

	public IEnumerable<Customer> FindAll()
	{
		throw new NotImplementedException();
	}
}

You’ll need to add a reference to the Domain layer for this to compile.

We derive from the abstract Repository class and declare that the object is represented by the Customer class in the domain layer, by the DatabaseCustomer class in the data storage layer, and that its id is of type int. We inherit the FindBy, ConvertToDatabaseType and FindAll methods. FindAll() comes indirectly from the IReadOnlyRepository interface, which is extended by IRepository, which in turn is extended by ICustomerRepository. And where are the three methods of IRepository? Remember that the Update, Insert and Delete methods have already been implemented in the abstract Repository class, so we don't need to worry about them. Any time we create a new domain-specific repository those methods have already been taken care of.

Let’s implement the methods one by one. In the FindBy(int id) method we’ll need to consult the DatabaseCustomers collection first and get the DatabaseCustomer object with the incoming id. Then we’ll populate the Customer domain object from its database representation. The DatabaseCustomers collection can be retrieved using the IObjectContextFactory dependency of the base class. However, its backing field is private, so let’s add the following read-only property to Repository.cs:

public IObjectContextFactory ObjectContextFactory
{
	get
	{
		return _objectContextFactory;
	}
}

The FindBy(int id) method can be implemented as follows:

public override Customer FindBy(int id)
{
	DatabaseCustomer databaseCustomer = (from dc in ObjectContextFactory.Create().DatabaseCustomers
										 where dc.Id == id
										 select dc).FirstOrDefault();
	Customer customer = new Customer()
	{
		Id = databaseCustomer.Id
		,Name = databaseCustomer.CustomerName
		,CustomerAddress = new Address()
		{
			AddressLine1 = databaseCustomer.Address
			,AddressLine2 = string.Empty
			,City = databaseCustomer.City
			,PostalCode = "N/A"
		}
	};
	return customer;
}

So we populate the Customer object based on the properties of the DatabaseCustomer object. We don't store the postal code or address line 2 in the DB so they are not populated. In a real-life scenario we'd probably need to rectify this by extending the DatabaseCustomer table, but for now we don't care. This example shows again the advantage of having complete freedom over the database and domain representations of a domain object. You are free to populate the domain object from its underlying database representation, and you can even consult other database objects if needed, as the domain object's properties may be dispersed across 2-3 or even more database tables. The domain object won't be concerned with such details. The domain and the database are completely detached, so changing one doesn't force you to change the other.

Let’s factor out the population process to a separate method as it will be needed later:

private Customer ConvertToDomain(DatabaseCustomer databaseCustomer)
{
	Customer customer = new Customer()
	{
		Id = databaseCustomer.Id
		,Name = databaseCustomer.CustomerName
		,CustomerAddress = new Address()
		{
			AddressLine1 = databaseCustomer.Address
			,AddressLine2 = string.Empty
			,City = databaseCustomer.City
			,PostalCode = "N/A"
		}
	};
	return customer;
}

The updated version of FindBy(int id), now with a null check, looks as follows:

public override Customer FindBy(int id)
{
	DatabaseCustomer databaseCustomer = (from dc in ObjectContextFactory.Create().DatabaseCustomers
										 where dc.Id == id
										 select dc).FirstOrDefault();
	if (databaseCustomer != null)
	{
		return ConvertToDomain(databaseCustomer);
	}
	return null;
}

The ConvertToDatabaseType(Customer domainType) method will be used when inserting and modifying domain objects:

public override DatabaseCustomer ConvertToDatabaseType(Customer domainType)
{
	return new DatabaseCustomer()
	{
		Address = domainType.CustomerAddress.AddressLine1
		,City = domainType.CustomerAddress.City
		,Country = "N/A"
		,CustomerName = domainType.Name
		,Id = domainType.Id
		,Telephone = "N/A"
	};
}

Nothing fancy here I suppose.

Finally FindAll simply retrieves all customers from the database and converts each to a domain type:

public IEnumerable<Customer> FindAll()
{
	List<Customer> allCustomers = new List<Customer>();
	List<DatabaseCustomer> allDatabaseCustomers = (from dc in ObjectContextFactory.Create().DatabaseCustomers
						   select dc).ToList();
	foreach (DatabaseCustomer dc in allDatabaseCustomers)
	{
		allCustomers.Add(ConvertToDomain(dc));
	}
	return allCustomers;
}

That's it; at present there's nothing more to add to the CustomerRepository class. If you add any specialised queries to the ICustomerRepository interface they will need to be implemented here.
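
As an illustration, suppose you declared a hypothetical IEnumerable<Customer> FindByCity(string city) method in ICustomerRepository. Its implementation here could follow the same pattern as FindAll:

public IEnumerable<Customer> FindByCity(string city)
{
	//fetch the matching database representations and convert each one to a domain object
	List<DatabaseCustomer> matchingDatabaseCustomers = (from dc in ObjectContextFactory.Create().DatabaseCustomers
							    where dc.City == city
							    select dc).ToList();
	return matchingDatabaseCustomers.Select(dc => ConvertToDomain(dc)).ToList();
}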

I’ll finish this post with a couple of remarks on EntityFramework.

Entity Framework

The Unit of Work pattern can also be implemented in conjunction with EntityFramework in a different, more simplified way. The database type is completely omitted from the structure, leading to the following Repository abstract class declaration:

public abstract class Repository<DomainType, IdType> : IUnitOfWorkRepository where DomainType : IAggregateRoot

So there's no trace of how the domain is represented in the database. The EF designer lets you map the columns of a table to the properties of an object. You can specify the namespace of the objects created by EF, i.e. by the .edmx file, like this:

1. Remove the code generation tool from the edmx file properties:

Remove EF code generation tool

2. Change the namespace of the project by right-clicking anywhere on the EF diagram and selecting Properties:

Change namespace in entity framework

Change that value to the namespace of your domain objects.

3. If the ID of a domain must be auto-generated by the database then you’ll need to set the StoreGeneratedPattern to Identity on that field:

EF identity generation

You can fine-tune the mapping by opening the .edmx file in a text editor. It is standard XML so you can edit it as long as you know what you are doing.

This way we don't need to declare both the domain type and the database type; we can work with a single type because it is common to EF and the domain model. It is reasonable to go down this path as it simplifies certain things, e.g. you don't need the conversions between DB and domain types. However, there are some things to keep in mind, I believe.

We inherently couple the domain object to its database counterpart. It may work with simple domain objects, but not with more complex ones where the database representation is spread out across different tables. We may even face a different scenario: say we made a design mistake in the database structuring phase and built a table that represents more than one domain object. We may not have the possibility of rebuilding that table as it contains millions of rows and is deployed in a production environment. How do we then map 2 or more domain objects to the same database table in the EF designer? It won't work, at least I don't know of any way to solve it. Also, mapping stored procedures to domain objects or domain object rules can be problematic. What's more, you'll need to mark as virtual those properties of your domain objects where you want to allow lazy loading by the ORM framework, like this:

public class Book
{
    public int Id { get; set; }
    public virtual Title Title { get; set; }
}

I think this breaks the persistence ignorance (PI) of proper POCO classes: we're modifying the domain to accommodate a persistence technology.

However, you may be OK with these limitations. Your philosophy may well differ from my rigid PI and POCO approach. It is certainly desirable that the database tables and domains are as close as possible, but you don’t always have this luxury with large legacy databases. If you start off with a completely empty database with no tables then EF and the code-first approach – where the data tables will be created for you based on your self-written object context implementation – can simplify the repository development process. However, keep in mind that database tables are a lot more rigid and resistant to change than your loosely coupled and layered code. Once the database has been put into production and you want to change the structure of a domain object you may run into difficulties as the corresponding database table may not be modified that easily.

In the next post we’ll discuss the application services layer.

View the list of posts on Architecture and Patterns here.

A model .NET web service based on Domain Driven Design Part 5: the concrete Repository

Introduction

In the previous post we laid the foundation for our repository layer. It's now time to see how those elements can be implemented. The data access layer we implement will be a simple in-memory data storage solution. I could have selected something more technical, like EntityFramework, but giving a detailed account of a data storage mechanism is not the goal of this series; it would probably only sidetrack us too much. The things we take up in this post should suffice for you to implement an EF-based concrete repository.

This is quite a large topic and it’s easy to get lost so this post will be devoted to laying the foundation of domain-specific repositories. In the next post we’ll implement the Customer domain repository.

The Customer repository

We need to expose the operations that the consumer of the code is allowed to perform on the Customer object. Recall from the previous post that we have 2 interfaces: one for read-only entities and another for the ones where we allow all CRUD operations. We want to be able to insert, select, modify and delete Customer objects. The domain-specific repository interfaces must be declared in the Domain layer.

This point is important to keep in mind: it is only the data access abstraction that we define in the Domain layer. It is up to the implementing data access mechanism to hide the implementation details. We do this in order to keep the Domain objects completely free of persistence logic so that they remain persistence-ignorant. You can persist the domains in the repository layer any way you want; it doesn't matter, as long as those details do not bubble up to the other layers in any way, shape or form. Once you start referencing e.g. the DB object context in the web layer you have committed to using EntityFramework, and a switch to another technology will be all the more difficult.

Insert the following interface in the Domain/Customer folder:

public interface ICustomerRepository : IRepository<Customer, int>
{
}

Right now it’s an empty interface as we don’t want to declare any new data access capability over and above the methods already defined in the IRepository interface. Recall that we included the FindAll() method in the IReadOnlyRepository interface which allows us to retrieve all entities from the data store. This may be dangerous to perform on an entity where we have millions of rows in the database. Therefore you may want to remove that method from the interface. However, if we only have a few customers then it may be OK and you could have the following customer repository:

public interface ICustomerRepository : IRepository<Customer, int>
{
	IEnumerable<Customer> FindAll();
}

The Repository layer

Insert a new C# class library called DDDSkeletonNET.Portal.Repository.Memory. Add a reference to the Infrastructure layer as we’ll need access to the abstractions defined there. We’ll first need to implement the Unit of Work. Add the following stub implementation to the Repository layer:

public class InMemoryUnitOfWork : IUnitOfWork
{
	public void RegisterUpdate(IAggregateRoot aggregateRoot, IUnitOfWorkRepository repository)
	{}

	public void RegisterInsertion(IAggregateRoot aggregateRoot, IUnitOfWorkRepository repository)
	{}

	public void RegisterDeletion(IAggregateRoot aggregateRoot, IUnitOfWorkRepository repository)
	{}

	public void Commit()
	{}
}

Let’s fill this in. As we don’t have any built-in solution to track the changes to our entities we’ll have to store the changes ourselves in in-memory collections. Add the following private fields to the class:

private Dictionary<IAggregateRoot, IUnitOfWorkRepository> _insertedAggregates;
private Dictionary<IAggregateRoot, IUnitOfWorkRepository> _updatedAggregates;
private Dictionary<IAggregateRoot, IUnitOfWorkRepository> _deletedAggregates;

We keep track of the changes in these dictionaries. Recall the function of the unit of work repository: it will perform the actual data persistence.

We’ll initialise these objects in the unit of work constructor:

public InMemoryUnitOfWork()
{
	_insertedAggregates = new Dictionary<IAggregateRoot, IUnitOfWorkRepository>();
	_updatedAggregates = new Dictionary<IAggregateRoot, IUnitOfWorkRepository>();
	_deletedAggregates = new Dictionary<IAggregateRoot, IUnitOfWorkRepository>();
}

Next we implement the registration methods which in practice only means that we’re filling up these dictionaries:

public void RegisterUpdate(IAggregateRoot aggregateRoot, IUnitOfWorkRepository repository)
{
	if (!_updatedAggregates.ContainsKey(aggregateRoot))
	{
		_updatedAggregates.Add(aggregateRoot, repository);
	}
}

public void RegisterInsertion(IAggregateRoot aggregateRoot, IUnitOfWorkRepository repository)
{
	if (!_insertedAggregates.ContainsKey(aggregateRoot))
	{
		_insertedAggregates.Add(aggregateRoot, repository);
	}
}

public void RegisterDeletion(IAggregateRoot aggregateRoot, IUnitOfWorkRepository repository)
{
	if (!_deletedAggregates.ContainsKey(aggregateRoot))
	{
		_deletedAggregates.Add(aggregateRoot, repository);
	}
}

We only want to add those changes that haven't been added before, hence the ContainsKey guard clause.

In the commit method we ask the unit of work repository to persist those changes:

public void Commit()
{
	foreach (IAggregateRoot aggregateRoot in _insertedAggregates.Keys)
	{
		_insertedAggregates[aggregateRoot].PersistInsertion(aggregateRoot);
	}

	foreach (IAggregateRoot aggregateRoot in _updatedAggregates.Keys)
	{
		_updatedAggregates[aggregateRoot].PersistUpdate(aggregateRoot);
	}

	foreach (IAggregateRoot aggregateRoot in _deletedAggregates.Keys)
	{
		_deletedAggregates[aggregateRoot].PersistDeletion(aggregateRoot);
	}
}

At present we don’t care about transactions and rollbacks as a fully optimised implementation is not the main goal here. We can however extend the Commit method to accommodate transactions:

public void Commit()
{
    using (TransactionScope scope = new TransactionScope())
    {
         //foreach loops...
         scope.Complete();
    }
}

You’ll need to import the System.Transactions library for this to work.
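
Putting the two snippets together, the complete Commit method with the transaction scope wrapped around the three loops would look like this:

public void Commit()
{
	using (TransactionScope scope = new TransactionScope())
	{
		foreach (IAggregateRoot aggregateRoot in _insertedAggregates.Keys)
		{
			_insertedAggregates[aggregateRoot].PersistInsertion(aggregateRoot);
		}

		foreach (IAggregateRoot aggregateRoot in _updatedAggregates.Keys)
		{
			_updatedAggregates[aggregateRoot].PersistUpdate(aggregateRoot);
		}

		foreach (IAggregateRoot aggregateRoot in _deletedAggregates.Keys)
		{
			_deletedAggregates[aggregateRoot].PersistDeletion(aggregateRoot);
		}

		//only when Complete() is called will the transaction be committed
		scope.Complete();
	}
}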

So what does a unit of work repository implementation look like? We’ll define it as a base abstract class that each concrete repository must derive from. We’ll build up an example step by step. Add the following stub to the Repository layer:

public abstract class Repository<DomainType, IdType, DatabaseType> : IUnitOfWorkRepository where DomainType : IAggregateRoot
{
	private readonly IUnitOfWork _unitOfWork;

	public Repository(IUnitOfWork unitOfWork)
	{
		if (unitOfWork == null) throw new ArgumentNullException("unitOfWork");
		_unitOfWork = unitOfWork;
	}

	public void PersistInsertion(IAggregateRoot aggregateRoot)
	{
		
	}

	public void PersistUpdate(IAggregateRoot aggregateRoot)
	{
			
	}

	public void PersistDeletion(IAggregateRoot aggregateRoot)
	{
		
	}
}

There’s not much functionality in here yet, but it’s an important first step. The Repository abstract class implements IUnitOfWorkRepository so it will be the persistence workhorse of the data storage. It has three type parameters: DomainType, IdType and DatabaseType.

IdType probably sounds familiar by now so I won't go into more detail here. Then we distinguish between the domain type and the database type. The domain type will be the type of the domain class, such as the Customer domain we've created. The database type will be the database representation of the same domain.

Why might we need a database type? It is not guaranteed that a domain object will have the same structure in the storage as in the domain layer. In the domain layer you have the freedom of adding, modifying and removing properties, rules etc. You may not have the same freedom in a relational database. If a data table is used by stored procedures and user-defined functions then you cannot just remove columns at will without breaking a lot of other DB logic. With the advent of NoSql databases such as MongoDb or RavenDb, which allow a very loose and flexible data structure, this requirement may change, but at the time of writing relational databases are still the first choice for data storage. Usually, as soon as a database is put into production and LIVE data starts filling up the data tables with potentially hundreds of thousands of rows every day, the data table structure becomes a lot more rigid than your domain classes. Hence it can be a good idea to isolate the "domain" and "database" representations of a domain object. It is of course only the concrete repository layer that should know how a domain object is represented in the data storage.

Before we continue with the Repository class let’s simulate the database representation of the Customer class. This can be likened to the objects that the EF or Linq to SQL automation tools generate for you. Insert a new folder called Database in the Repository.Memory layer. Insert the following class into it:

public class DatabaseCustomer
{
	public int Id { get; set; }
	public string CustomerName { get; set; }
	public string Address { get; set; }
	public string Country { get; set; }
	public string City { get; set; }
	public string Telephone { get; set; }
}

This example shows an advantage of having separate domain and database representations. Recall that the Customer domain has an Address value object. In this example we imagine that there's no separate Address table; we didn't think of that when we designed the original database. As the database has been in use for some time, it's probably too late to create a separate Address table without disrupting the LIVE system.

However, we don’t care because we’ve allowed for this possibility by the type parameters. We’re free to construct and convert between domain and database objects in the concrete repository layer.

The next object we'll need is an imitation of the DB object context in EF and Linq to SQL. I realise that I may be talking too much about these specific ORM technologies, but they probably cover the vast majority of data-driven .NET projects out there. Insert the following class into the Database folder:

public class InMemoryDatabaseObjectContext
{
	//simulation of database collections
	public List<DatabaseCustomer> DatabaseCustomers { get; set; }

	public InMemoryDatabaseObjectContext()
	{
		InitialiseDatabaseCustomers();
	}

	public void AddEntity<T>(T databaseEntity)
	{
		if (databaseEntity is DatabaseCustomer)
		{
			DatabaseCustomer databaseCustomer = databaseEntity as DatabaseCustomer;
			databaseCustomer.Id = DatabaseCustomers.Count + 1;
			DatabaseCustomers.Add(databaseEntity as DatabaseCustomer);
		}
	}

	public void UpdateEntity<T>(T databaseEntity)
	{
		if (databaseEntity is DatabaseCustomer)
		{
			DatabaseCustomer dbCustomer = databaseEntity as DatabaseCustomer;
			DatabaseCustomer dbCustomerToBeUpdated = (from c in DatabaseCustomers where c.Id == dbCustomer.Id select c).FirstOrDefault();
			dbCustomerToBeUpdated.Address = dbCustomer.Address;
			dbCustomerToBeUpdated.City = dbCustomer.City;
			dbCustomerToBeUpdated.Country = dbCustomer.Country;
			dbCustomerToBeUpdated.CustomerName = dbCustomer.CustomerName;
			dbCustomerToBeUpdated.Telephone = dbCustomer.Telephone;
		}
	}

	public void DeleteEntity<T>(T databaseEntity)
	{
		if (databaseEntity is DatabaseCustomer)
		{
			DatabaseCustomer dbCustomer = databaseEntity as DatabaseCustomer;
			DatabaseCustomer dbCustomerToBeDeleted = (from c in DatabaseCustomers where c.Id == dbCustomer.Id select c).FirstOrDefault();
			DatabaseCustomers.Remove(dbCustomerToBeDeleted);
		}
	}

	private void InitialiseDatabaseCustomers()
	{
		DatabaseCustomers = new List<DatabaseCustomer>();
		DatabaseCustomers.Add(new DatabaseCustomer(){Address = "Main street", City = "Birmingham", Country = "UK", CustomerName ="GreatCustomer", Id = 1, Telephone = "N/A"});
		DatabaseCustomers.Add(new DatabaseCustomer() { Address = "Strandvägen", City = "Stockholm", Country = "Sweden", CustomerName = "BadCustomer", Id = 2, Telephone = "123345456" });
		DatabaseCustomers.Add(new DatabaseCustomer() { Address = "Kis utca", City = "Budapest", Country = "Hungary", CustomerName = "FavouriteCustomer", Id = 3, Telephone = "987654312" });
	}
}

First we have a collection of DB customer objects. In the constructor we initialise the collection so that we don't start with an empty one. Then we add generic insertion, update and delete methods that accept a database type T. These are somewhat analogous to the InsertOnSubmit, SubmitChanges and DeleteOnSubmit methods of the Linq to SQL object context. The implementation itself is not too clever of course, as we need to check the type, but it's OK for now. Perfection is not the goal in this part of the code; the automation tools will prepare a lot more professional code for you from the database. In the AddEntity method we assign an ID based on the number of elements in the collection. This is a simulation of the ID auto-assign feature of relational databases.

We'll need access to this InMemoryDatabaseObjectContext in the repository layer. In a real system this object will be used intensively, therefore access to it should be regulated. Creating a new object context for every single DB operation is not too clever, so we'll hide the access behind a thread-safe lazy singleton class. If you don't know what these terms mean, check out my post on the singleton design pattern. I won't repeat the material that's written there.

Insert the following abstract factory interface in the Database folder:

public interface IObjectContextFactory
{
	InMemoryDatabaseObjectContext Create();
}

The interface will be implemented by the following class:

public class LazySingletonObjectContextFactory : IObjectContextFactory
{
	public InMemoryDatabaseObjectContext Create()
	{
		return InMemoryDatabaseObjectContext.Instance;
	}
}

The InMemoryDatabaseObjectContext object does not have any Instance property yet, so add the following code to it:

public static InMemoryDatabaseObjectContext Instance
{
	get
	{
		return Nested.instance;
	}
}

private class Nested
{
	static Nested()
	{
	}
	internal static readonly InMemoryDatabaseObjectContext instance = new InMemoryDatabaseObjectContext();
}

If you don’t understand what this code does then make sure you read through the post on the singleton pattern.
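
As an alternative, on .NET 4 and later you could achieve the same thread-safe lazy initialisation with the built-in Lazy<T> class instead of the nested class; a minimal sketch:

//Lazy<T> is thread-safe by default (LazyThreadSafetyMode.ExecutionAndPublication)
private static readonly Lazy<InMemoryDatabaseObjectContext> _lazyInstance =
	new Lazy<InMemoryDatabaseObjectContext>(() => new InMemoryDatabaseObjectContext());

public static InMemoryDatabaseObjectContext Instance
{
	get
	{
		return _lazyInstance.Value;
	}
}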

We’ll need a reference to the abstract factory in the Repository object, so modify the private field declarations and the constructor as follows:

private readonly IUnitOfWork _unitOfWork;
private readonly IObjectContextFactory _objectContextFactory;

public Repository(IUnitOfWork unitOfWork, IObjectContextFactory objectContextFactory)
{
	if (unitOfWork == null) throw new ArgumentNullException("unitOfWork");
	if (objectContextFactory == null) throw new ArgumentNullException("objectContextFactory");
	_unitOfWork = unitOfWork;
	_objectContextFactory = objectContextFactory;
}

At this point it’s also clear that we’ll need to be able to convert a domain type to a database type. This is best implemented in the concrete repository classes so it’s enough to add an abstract method in the Repository class:

public abstract DatabaseType ConvertToDatabaseType(DomainType domainType);

We can implement the Persist methods as follows:

public void PersistInsertion(IAggregateRoot aggregateRoot)
{
	DatabaseType databaseType = RetrieveDatabaseTypeFrom(aggregateRoot);
	_objectContextFactory.Create().AddEntity<DatabaseType>(databaseType);
}

public void PersistUpdate(IAggregateRoot aggregateRoot)
{
	DatabaseType databaseType = RetrieveDatabaseTypeFrom(aggregateRoot);
	_objectContextFactory.Create().UpdateEntity<DatabaseType>(databaseType);
}

public void PersistDeletion(IAggregateRoot aggregateRoot)
{
	DatabaseType databaseType = RetrieveDatabaseTypeFrom(aggregateRoot);
	_objectContextFactory.Create().DeleteEntity<DatabaseType>(databaseType);
}

private DatabaseType RetrieveDatabaseTypeFrom(IAggregateRoot aggregateRoot)
{
	DomainType domainType = (DomainType)aggregateRoot;
	DatabaseType databaseType = ConvertToDatabaseType(domainType);
	return databaseType;
}

We need to convert the incoming IAggregateRoot object to the database type, as it is the database type that the DB understands. The DB type is the database's own representation of the domain, so that's the form we need to hand over to it.

So these are the persistence methods but we need to register the changes first. Add the following methods to Repository.cs:

public void Update(DomainType aggregate)
{
	_unitOfWork.RegisterUpdate(aggregate, this);
}

public void Insert(DomainType aggregate)
{
	_unitOfWork.RegisterInsertion(aggregate, this);
}

public void Delete(DomainType aggregate)
{
	_unitOfWork.RegisterDeletion(aggregate, this);
}

These operations are so common and repetitive that we can put them in this abstract base class instead of letting the domain-specific repositories implement them over and over again. All they do is add the operations to the queue of the unit of work, to be performed when Commit() is called. Upon Commit() the unit of work repository, i.e. the abstract Repository object, will persist the changes using the in-memory object context.

This model is relatively straightforward to change:

  • If you need a plain file-based storage mechanism then you might create a FileObjectContext class where you read to and from a file.
  • For EntityFramework and Linq to SQL you’ll use the built-in object context classes to keep track of and persist the changes
  • File-based NoSql solutions generally also have drivers for .NET – they can be used to implement a NoSql solution

So the possibilities are endless in fact. You can hide the implementation behind abstractions such as the IUnitOfWork interface or the Repository abstract class. It may well be that you need to add methods to the Repository class depending on the data storage technology you use but that’s perfectly acceptable. The concrete repository layer is…, well, concrete. You can dedicate it to a specific technology, just like the one we’re building here is dedicated to an in-memory storage mechanism. You can have several different implementations of the IUnitOfWork and IUnitOfWorkRepository interfaces and test them before you let your application go public. You can even mix and match the implementations:

  • Register the changes in a temporary file and commit them in NoSql
  • Register the changes in memory and commit them to the cache
  • Register the changes in cache and commit them to SQL Azure

Of course these combinations are very exotic, but they show you the flexibility behind all these abstractions. Don't assume that the Repository class is a solution for ALL types of concrete unit of work; you'll certainly have to modify it depending on the concrete data storage mechanism you work with. However, note the following points:

  • The rest of the application will not be concerned with the concrete data access implementation
  • The implementation details are well hidden behind this layer without them bubbling up to and permeating the other layers

So as long as the concrete implementations are hidden in the data access layer you’ll be fine.

There's one last abstract method to add to Repository.cs: the ubiquitous find-by-id method, which we'll delegate to the implementing classes:

public abstract DomainType FindBy(IdType id);

We’ll implement the CustomerRepository class in the next post.

View the list of posts on Architecture and Patterns here.

A model .NET web service based on Domain Driven Design Part 4: the abstract Repository

Introduction

In the previous post we created a very basic domain layer with the first domain object: Customer. We’ll now see what the data access layer could look like in an abstract form. This is where we must be careful not to commit the same mistakes as in the technology-driven example of the introductory post of this series.

We must abstract away the implementation details of the data access technology we use so that we can easily switch strategies later if necessary. We cannot let any technology-specific implementation bubble up from the data access layer. These details include the object context in EntityFramework, MongoClient and MongoServer in MongoDb .NET, the objects related to the file system in a purely file-based data access solution, and so on; you probably get the idea. We must therefore make sure that no other layer depends on the concrete data access solution.

We’ll first lay the foundations for abstracting away any type of data access technology and you may find it a “heavy” process with a steep learning curve. As these technologies come in many different shapes this is not the most trivial task to achieve. We can have ORM technologies like EntityFramework, file-based data storage like MongoDb, key-value style storage such as Azure storage, and it’s not easy to find a common abstraction that fits all of them. We’ll need to accommodate these technologies without hurting the ISP principle too much. We’ll follow a couple of well-established patterns and see how they can be implemented.

Aggregate root

We discussed aggregates and aggregate roots in this post. Recall that aggregates are handled as one unit where the aggregate root is the entry point, the “boss” of the aggregate. This implies that the data access layer should only handle aggregate roots. It should not accept objects that lie somewhere within the aggregate. We haven’t yet implemented any code regarding aggregate roots, but we can take a very simple approach. Insert the following empty interface in the Infrastructure.Common/Domain folder:

public interface IAggregateRoot
{
}

Conceptually it would probably be better to create a base class for aggregate roots, but we already have one for entities. As you know, a class cannot derive from two base classes, so we'll indicate aggregate roots with this interface instead. At present it is simply a marker interface with no methods. We could add common properties or methods here but I cannot think of any right now.

After consulting the domain expert we decide that the Customer domain is an aggregate root: the root of its own Customer aggregate. Right now the Customer aggregate has only one member, the Customer entity, but that's perfectly acceptable; an aggregate doesn't necessarily consist of two or more objects. Let's change the Customer domain object declaration as follows:

public class Customer : EntityBase<int>, IAggregateRoot

The rest of the post will look at how to abstract away the data access technology and some patterns related to that.

Unit of work

The first important concept within data access and persistence is the Unit of Work. The purpose of a Unit of Work is to maintain a list of objects that have been modified by a transaction. These modifications include insertions, updates and deletions. The Unit of Work co-ordinates the persistence of these changes and also checks for concurrency problems, such as the same object being modified by different threads.

This may sound a bit cryptic but if you’re familiar with EntityFramework or Linq to SQL then the object context in each technology is a good example for an implementation of a unit of work:

//EntityFramework: the object context registers and then persists the changes
DbContext.AddObject("Customers", new Customer());
DbContext.SaveChanges();

//Linq to SQL: the data context does the same
DbContext.Customers.InsertOnSubmit(new Customer());
DbContext.SubmitChanges();

In both cases the DbContext, which takes the role of the Unit of Work, first registers the changes. The changes are not persisted until the SaveChanges() – EntityFramework – or the SubmitChanges() – Linq to SQL – method is called. Note that the DB object context is responsible for registering AND persisting the changes.

The Unit of Work pattern can be represented by the following interface:

public interface IUnitOfWork
{
	void RegisterUpdate(IAggregateRoot aggregateRoot);
	void RegisterInsertion(IAggregateRoot aggregateRoot);
	void RegisterDeletion(IAggregateRoot aggregateRoot);
	void Commit();
}

Notice that we have separated the registration and commit actions. In the case of EntityFramework and similar ORM technologies the object that registers and persists the changes will often be the same: the object context or a similar object.

However, this is not always the case. It is perfectly reasonable that you are not fond of these automation tools and want to use something more basic where you have the freedom of specifying how you track changes and how you persist them: you may register the changes in memory and persist them in a file. Or you may still have a lot of legacy ADO.NET where you want to move to a more modern layered architecture. ADO.NET lacks a DbContext object so you may have to solve the registration of changes in a different way.

For those scenarios we need to introduce another abstraction. Insert the following interface in a new folder called UnitOfWork in the Infrastructure layer:

public interface IUnitOfWorkRepository
{
	void PersistInsertion(IAggregateRoot aggregateRoot);
	void PersistUpdate(IAggregateRoot aggregateRoot);
	void PersistDeletion(IAggregateRoot aggregateRoot);
}

Insert a new interface called IUnitOfWork in the same folder:

public interface IUnitOfWork
{
	void RegisterUpdate(IAggregateRoot aggregateRoot, IUnitOfWorkRepository repository);
	void RegisterInsertion(IAggregateRoot aggregateRoot, IUnitOfWorkRepository repository);
	void RegisterDeletion(IAggregateRoot aggregateRoot, IUnitOfWorkRepository repository);
	void Commit();
}

The Unit of Work abstraction can thus be extended to allow a complete separation between the registration and the persistence of aggregate roots. Later on, as we see these elements at work and as you get more acquainted with the structure of the solution, you may decide that this is overkill and that you want to go with the simpler solution; it's up to you.

Repository pattern

The repository pattern is used to declare the possible data access related actions you want to expose for your aggregate roots. The most basic of those are CRUD operations: insert, update, delete, select. You can have other methods such as insert many, select by ID, select by id and date etc., but the CRUD operations are probably the most common across all aggregate roots.

It may be beneficial to divide those operations into two groups: read and write operations. You may have a read-only policy for some aggregate roots so you don’t need to expose all of those operations. For read-only aggregates you can have a read-only repository. Insert the following interface in the Domain folder of the Infrastructure project:

public interface IReadOnlyRepository<AggregateType, IdType> where AggregateType : IAggregateRoot
{
	AggregateType FindBy(IdType id);
	IEnumerable<AggregateType> FindAll();
}

Here again we only allow aggregate roots to be selected. Finding an object by its ID is such a basic operation that it just has to be included here. FindAll is a bit more of a sensitive issue. If you have a data table with millions of rows then you may not want to expose this method as it will bog down your data server, so use it with care. It's perfectly OK to omit this operation from this interface; you will be able to include it in domain-specific repositories instead, as we'll see in the next post. E.g. if you want to enable the retrieval of all Customers from the database then you can expose this method in the CustomerRepository class.

The other data access operations can be exposed in an "actionable" interface. Insert the following interface in the same Domain folder of the Infrastructure project:

public interface IRepository<AggregateType, IdType> : IReadOnlyRepository<AggregateType, IdType>
	where AggregateType : IAggregateRoot
{
	void Update(AggregateType aggregate);
	void Insert(AggregateType aggregate);
	void Delete(AggregateType aggregate);
}

Keep in mind that these interfaces only expose the most common actions that are applicable across all aggregate roots. You’ll be able to specify domain-specific actions in domain-specific implementations of these repositories.
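
To illustrate, a domain-specific repository interface can add its own query methods on top of the common ones. The FindByCity method below is made up purely for the sake of the example; we'll create the real ICustomerRepository in the next post:

public interface ICustomerRepository : IRepository<Customer, int>
{
	//a hypothetical domain-specific query
	IEnumerable<Customer> FindByCity(string city);
}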

We’ll look at how these elements can be implemented for the Customer object in the next post.

View the list of posts on Architecture and Patterns here.

A model .NET web service based on Domain Driven Design Part 3: the Domain

Introduction

In the previous post we laid the theoretical foundation for our domains. Now it’s finally time to see some code. In this post we’ll concentrate on the Domain layer and we’ll also start building the Infrastructure layer.

Infrastructure?

Often the word infrastructure is taken to mean the storage mechanism, like an SQL database, or the physical layers of a system, such as the servers. However, in this case we mean something different. The infrastructure layer is a place for all sorts of cross-cutting concerns and objects that can be used in any project. They are not specific to any single project or domain. Examples: logging, file operations, security, caching, helper classes for Date and String operations etc. Putting these in a separate infrastructure layer helps if you want to employ the same logging, caching etc. policy across all your projects. You could put these within the project solution, but then when you start your next project you may need to copy and paste a lot of code.

Infrastructure

The Entities we discussed in the previous post will all derive from an abstract EntityBase class. We could put it directly into the domain layer. However, think of this class as the base for all entities across all your DDD projects where you put all common functionality for your domains.

Create a new blank solution in VS and call it DDDSkeletonNET.Portal. Add a new C# class library called DDDSkeletonNET.Infrastructure.Common. Remove Class1 and add a new folder called Domain. In that folder add a new class called EntityBase:

public abstract class EntityBase<IdType>
{
	public IdType Id { get; set; }

	public override bool Equals(object entity)
	{
		return entity != null
		   && entity is EntityBase<IdType>
		   && this == (EntityBase<IdType>)entity;
	}

	public override int GetHashCode()
	{
		return this.Id.GetHashCode();
	}

	public static bool operator ==(EntityBase<IdType> entity1, EntityBase<IdType> entity2)
	{
		if ((object)entity1 == null && (object)entity2 == null)
		{
			return true;
		}

		if ((object)entity1 == null || (object)entity2 == null)
		{
			return false;
		}

		if (entity1.Id.ToString() == entity2.Id.ToString())
		{
			return true;
		}

		return false;
	}

	public static bool operator !=(EntityBase<IdType> entity1, EntityBase<IdType> entity2)
	{
		return (!(entity1 == entity2));
	}
}

We only have one property at this point, the ID, whose type can be specified through the IdType type parameter. Often this will be an integer, or maybe a GUID or even some auto-generated string. The rest of the code takes care of comparison issues so that you can compare two entities with the ‘==’ operator or the Equals method. Recall that entityA == entityB if and only if their IDs are identical, hence the comparison is based on the ID property.
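
A quick illustration of this identity-based equality, using the Customer entity we'll create further down in this post:

Customer firstReference = new Customer() { Id = 1, Name = "GreatCustomer" };
Customer secondReference = new Customer() { Id = 1, Name = "GreatCustomer Ltd" };
bool equal = firstReference == secondReference; //true: only the Id values are compared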

Domains need to validate themselves when insertions or updates are executed so let’s add the following abstract method to EntityBase.cs:

protected abstract void Validate();

We can describe our business rules in many ways but the simplest format is a description in words. Add a class called BusinessRule to the Domain folder:

public class BusinessRule
{
	private string _ruleDescription;

	public BusinessRule(string ruleDescription)
	{
		_ruleDescription = ruleDescription;
	}

	public String RuleDescription
	{
		get
		{
			return _ruleDescription;
		}
	}
}

Coming back to EntityBase.cs we’ll store the list of broken business rules in a private variable:

private List<BusinessRule> _brokenRules = new List<BusinessRule>();

This list represents all the business rules that haven’t been adhered to during the object composition: the total price is incorrect, the customer name is empty, the person’s age is negative etc., so it’s all the things that make the state of the object invalid. We don’t want to save objects in an invalid state in the data storage, so we definitely need validation.

Implementing entities will be able to add to this list through the following method:

protected void AddBrokenRule(BusinessRule businessRule)
{
	_brokenRules.Add(businessRule);
}

External code will collect all broken rules by calling this method:

public IEnumerable<BusinessRule> GetBrokenRules()
{
	_brokenRules.Clear();
	Validate();
	return _brokenRules;
}

We first clear the list so that we don’t return any previously stored broken rules. They may have been fixed by then. We then run the Validate method which is implemented in the concrete domain classes. The domain will fill up the list of broken rules in that implementation. The list is then returned. We’ll see in a later post on the application service layer how this method can be used from the outside.

We're now ready to implement the first domain object. Add a new C# class library called DDDSkeletonNET.Portal.Domain to the solution. Let's make this easy for us and create the most basic Customer domain. Add a new folder called Customer and in it a class called Customer which will derive from EntityBase.cs. The Domain project will need to reference the Infrastructure project. Let's say the Customer will have an id of type integer. At first the class will look as follows:

public class Customer : EntityBase<int>
{
	protected override void Validate()
	{
		throw new NotImplementedException();
	}
}

We know from the domain expert that every Customer will have a name, so we add the following property:

public string Name { get; set; }

Our first business rule says that the customer name cannot be null or empty. We’ll store these rule descriptions in a separate file within the Customer folder:

public static class CustomerBusinessRule
{
	public static readonly BusinessRule CustomerNameRequired = new BusinessRule("A customer must have a name.");
}

We can now implement the Validate() method in the Customer domain:

protected override void Validate()
{
	if (string.IsNullOrEmpty(Name))
	{
		AddBrokenRule(CustomerBusinessRule.CustomerNameRequired);
	}
}

Let’s now see how value objects can be used in code. The domain expert says that every customer will have an address property. We decide that we don’t need to track Addresses the same way as Customers, i.e. we don’t need to set an ID on them. We’ll need a base class for value objects in the Domain folder of the Infrastructure layer:

public abstract class ValueObjectBase
{
	private List<BusinessRule> _brokenRules = new List<BusinessRule>();

	public ValueObjectBase()
	{
	}

	protected abstract void Validate();

	public void ThrowExceptionIfInvalid()
	{
		_brokenRules.Clear();
		Validate();
		if (_brokenRules.Count() > 0)
		{
			StringBuilder issues = new StringBuilder();
			foreach (BusinessRule businessRule in _brokenRules)
			{
				issues.AppendLine(businessRule.RuleDescription);
			}

			throw new ValueObjectIsInvalidException(issues.ToString());
		}
	}

	protected void AddBrokenRule(BusinessRule businessRule)
	{
		_brokenRules.Add(businessRule);
	}
}

…where ValueObjectIsInvalidException looks as follows:

public class ValueObjectIsInvalidException : Exception
{
	public ValueObjectIsInvalidException(string message)
		: base(message)
	{}
}

You’ll recognise the Validate and AddBrokenRule methods. Value objects can of course also have business rules that need to be enforced. In the Domain layer add a new folder called ValueObjects. Add a class called Address in that folder:

public class Address : ValueObjectBase
{
	protected override void Validate()
	{
		throw new NotImplementedException();
	}
}

Add the following properties to the class:

public string AddressLine1 { get; set; }
public string AddressLine2 { get; set; }
public string City { get; set; }
public string PostalCode { get; set; }

The domain expert says that every Address object must have a valid City property. We can follow the same structure we took above. Add the following class to the ValueObjects folder:

public static class ValueObjectBusinessRule
{
	public static readonly BusinessRule CityInAddressRequired = new BusinessRule("An address must have a city.");
}

We can now complete the Validate method of the Address object:

protected override void Validate()
{
	if (string.IsNullOrEmpty(City))
	{
		AddBrokenRule(ValueObjectBusinessRule.CityInAddressRequired);
	}
}

Let’s add this new Address property to Customer:

public Address CustomerAddress { get; set; }

We’ll include the value object validation in the Customer validation:

protected override void Validate()
{
	if (string.IsNullOrEmpty(Name))
	{
		AddBrokenRule(CustomerBusinessRule.CustomerNameRequired);
	}
	CustomerAddress.ThrowExceptionIfInvalid();
}

As the Address value object is a property of the Customer entity it’s practical to include its validation within this Validate method. The Customer object doesn’t need to concern itself with the exact rules of an Address object. Customer effectively tells Address to go and validate itself.

We'll stop building our domain layer now. We could add several more domain objects, but let's keep things as simple as possible so that you can view the whole system without having to track too many threads. We now have an example of an entity, a value object and a couple of basic validation rules. We haven't yet seen how aggregate roots play a role in code, but we'll come back to that in the next post. You probably want to see how the repository, service and UI layers can be wired together.

We’ll start building the repository layer in the next post.

View the list of posts on Architecture and Patterns here.
