Design patterns and practices in .NET: the Composite pattern

Introduction

The Composite pattern deals with putting individual objects together to form a whole. In mathematics the relationship between the objects and the composite object they build can be described by a part-whole hierarchy. The ingredient objects are the parts and the composite is the whole.

In essence we build up a tree – a composite – that consists of one or more children – the leaves. The client calling upon the composite should be able to treat the individual parts of the whole in a uniform way.

A real life example is sending emails. If you want to send an email to all developers in your organisation, one option is to type the name of each developer in the ‘to’ field. This is of course not efficient. Fortunately we can construct recipient groups, such as Developers. If you then also want to send the email to another person outside the Developers group you can simply put their name in the ‘to’ box along with Developers. We treat both the group and the individual addresses in a uniform way: we can insert both groups and individual addresses in the ‘to’ box. We rely on the email engine to take the group apart and send the email to each recipient in that group. We don’t really care how it’s done – apart from a couple of network geeks I guess.

Demo

We will first build a demo application that does not use the pattern and then we’ll refactor it. We’ll simulate a game where play money is split among the players in a group if they manage to kill a monster.

Start up Visual Studio and create a new console application. Insert a new class called Player:

public class Player
{
	public string Name { get; set; }
	public int Gold { get; set; }

	public void Stats()
	{
		Console.WriteLine("{0} has {1} coins.", Name, Gold);
	}
}

This is easy to follow I believe. A group of players is represented by the Group class:

public class Group
{
	public string Name { get; set; }
	public List<Player> Members { get; set; }

	public Group()
	{
		Members = new List<Player>();
	}
}

The money splitting mechanism is run in the Main method as follows:

static void Main(string[] args)
{
	int goldForKill = 1023;
	Console.WriteLine("You have killed the Monster and gained {0} coins!", goldForKill);

	Player andy = new Player { Name = "Andy" };
	Player jane = new Player { Name = "Jane" };
	Player eve = new Player { Name = "Eve" };
	Player ann = new Player { Name = "Ann" };
	Player edith = new Player { Name = "Edith" };
	Group developers = new Group { Name = "Developers", Members = { andy, jane, eve } };

	List<Player> individuals = new List<Player> { ann, edith };
	List<Group> groups = new List<Group> { developers };

	int totalToSplitBy = individuals.Count + groups.Count;
	int amountForEach = goldForKill / totalToSplitBy;
	int leftOver = goldForKill % totalToSplitBy;

	foreach (Player individual in individuals)
	{
		individual.Gold += amountForEach + leftOver;
		leftOver = 0;
		individual.Stats();
	}

	foreach (Group group in groups)
	{
		int amountForEachGroupMember = amountForEach / group.Members.Count;
		int leftOverForGroup = amountForEach % group.Members.Count;
		foreach (Player member in group.Members)
		{
			member.Gold += amountForEachGroupMember + leftOverForGroup;
			leftOverForGroup = 0;
			member.Stats();
		}
	}

	Console.ReadKey();
}

So our brilliant game starts off right after the monster has been killed and we’re ready to hand out the reward among the players. We have 5 players. Three of them make up a group and the other two are individual players. We then split the gold among the participants where the group counts as one unit, i.e. we divide by 3: the two individual players plus the Developers group. We go through each individual and give them their share. Then we do the same for each group, where the group’s share is divided further among the players within that group.

Build and run the application and you’ll see in the console that the 1023 pieces of gold were divided up. The code works but it’s definitely quite messy. Keep in mind that our tree hierarchy is very simple: we can have individuals and groups. Think of a more complicated scenario: within the Developers group we could have subgroups, such as .NET developers and Java developers, which are further subdivided into web and desktop developers, plus individuals that do not fit into any group. In the code we iterate through the individuals and the groups manually. We also iterate through the players within the group. Imagine having to iterate through the subgroups of the subgroups of a group in a deeper hierarchy. The foreach loops would keep growing and the splitting logic would become very challenging to maintain.

So let’s refactor the code. The composite pattern states that the client should be able to treat the individual part and the whole in a uniform way. Thus the first step is to make the Player and the Group class uniform in some way. As it turns out the logical way to do this is to have both classes implement an interface that the client can communicate with. So the client won’t deal with groups and individuals but with a uniform abstraction, such as a participant.

Insert an interface called IParticipant:

public interface IParticipant
{
	int Gold { get; set; }
	void Stats();
}

Every participant of the game will have some gold and will be able to write out the current statistics regardless of them being individuals or groups. We let Player and Group implement the interface:

public class Player : IParticipant
{
	public string Name { get; set; }
	public int Gold { get; set; }

	public void Stats()
	{
		Console.WriteLine("{0} has {1} coins.", Name, Gold);
	}
}

The Player class implements the interface without changes in its body.

The Group class will encapsulate the gold sharing logic we saw in the Main method above:

public class Group : IParticipant
{
	public string Name { get; set; }
	public List<IParticipant> Members { get; set; }

	public Group()
	{
		Members = new List<IParticipant>();
	}

	public int Gold
	{
		get
		{
			int totalGold = 0;
			foreach (IParticipant member in Members)
			{
				totalGold += member.Gold;
			}

			return totalGold;
		}
		set
		{
			int eachSplit = value / Members.Count;
			int leftOver = value % Members.Count;
			foreach (IParticipant member in Members)
			{
				member.Gold += eachSplit + leftOver;
				leftOver = 0;
			}
		}
	}

	public void Stats()
	{
		foreach (IParticipant member in Members)
		{
			member.Stats();
		}
	}
}

In the Gold property getter we simply loop through the group members and add up their gold. In the setter we split the incoming amount among the group members. Note also that Group holds a list of IParticipant objects representing either individual players or subgroups. Those subgroups can in turn have subgroups of their own, so the getter and setter automatically recurse into the nested members as well. The leftOver variable is set to 0 after the first iteration, meaning the first member receives the entire remainder – we won’t worry about such details here.

In the Stats method we simply call the statistics of each group member – again group members can be individuals and subgroups. If it’s a subgroup then the Stats method of the members of the subgroup will automatically be called.

The modified Main method looks as follows:

static void Main(string[] args)
{
	int goldForKill = 1023;
	Console.WriteLine("You have killed the Monster and gained {0} coins!", goldForKill);

	IParticipant andy = new Player { Name = "Andy" };
	IParticipant jane = new Player { Name = "Jane" };
	IParticipant eve = new Player { Name = "Eve" };
	IParticipant ann = new Player { Name = "Ann" };
	IParticipant edith = new Player { Name = "Edith" };
	IParticipant oldBob = new Player { Name = "Old Bob" };
	IParticipant newBob = new Player { Name = "New Bob" };
	IParticipant bobs = new Group { Members = { oldBob, newBob } };
	IParticipant developers = new Group { Name = "Developers", Members = { andy, jane, eve, bobs } };

	IParticipant participants = new Group { Members = { developers, ann, edith } };
	participants.Gold += goldForKill;
	participants.Stats();

	Console.ReadKey();
}

You can see that the client, i.e. the Main method, calls the methods of IParticipant where IParticipant can be an individual, a group or a group within a group. When we set the gold reward through the Gold property the distribution logic of each concrete type is called, which even takes care of sharing the gold among the groups within a group. The participants variable includes all members of the game.

The main advantage of this pattern is that the tree structure can now be as deep as you like and you don’t have to change the logic within the Player and Group classes. Also, we contain the differences between a leaf and a group in the Player and Group classes separately. In addition, they can also be tested independently.
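Because both types hide behind IParticipant, a leaf and a whole subtree can be exercised through the exact same calls. The following standalone sketch – with the types above compacted and Stats omitted for brevity – illustrates the recursion:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Compact copies of the composite types above (Stats omitted for brevity).
public interface IParticipant
{
	int Gold { get; set; }
}

public class Player : IParticipant
{
	public int Gold { get; set; }
}

public class Group : IParticipant
{
	public List<IParticipant> Members { get; } = new List<IParticipant>();

	public int Gold
	{
		// The getter recurses into subgroups; the setter splits the amount.
		get { return Members.Sum(m => m.Gold); }
		set
		{
			int eachSplit = value / Members.Count;
			int leftOver = value % Members.Count;
			foreach (IParticipant member in Members)
			{
				member.Gold += eachSplit + leftOver;
				leftOver = 0;
			}
		}
	}
}

public class CompositeCheck
{
	public static void Main()
	{
		// A subtree nested in another group is handled exactly like a leaf.
		Group inner = new Group { Members = { new Player(), new Player() } };
		Group outer = new Group { Members = { inner, new Player() } };
		outer.Gold += 100;             // 50 to inner (25 + 25), 50 to the lone player
		Console.WriteLine(outer.Gold); // prints 100
	}
}
```

Nothing in CompositeCheck knows whether a member is a player or a group – exactly the uniform treatment the pattern promises.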

Build and run the project and you should see the amount of gold split among all participants of the game.

View the list of posts on Architecture and Patterns here.

Design patterns and practices in .NET: the Chain of Responsibility pattern

Introduction

The Chain of Responsibility is an ordered chain of message handlers that can process a specific type of message or pass the message to the next handler in the chain. This pattern revolves around messaging between a sender and one or more receivers. This probably sounds very cryptic – just like the basic description of design patterns in general – so let’s see a quick example in words.

Suppose we have a Sender that knows the first receiver in the messaging chain, call it Receiver A. Receiver A is in turn aware of Receiver B and so on. When a message comes in to a sender we can only perform one thing: pass it along to the first receiver. Receiver A inspects the message and decides whether it can process it or not. If not then it passes the message along to Receiver B and Receiver B will perform the same message inspection as Receiver A. It then decides to either process the Message or send it further down the messaging chain. If it processes the message then it will send a Response back to the Sender.

In this example the Sender sent a message to the first Receiver and received a Response from a different one, even though the Sender has no knowledge of Receiver B. If the messaging stops at Receiver B then any receiver further down the chain, say Receiver C, remains completely inactive: it has no knowledge of the Message or that any messaging occurred at all.

The example also showcases the traits of the pattern:

  • The Sender is only aware of the first receiver
  • Each receiver only knows of the next receiver down the messaging chain
  • Receivers can process the Message or send it down the chain
  • The Sender will have no knowledge about which Receiver received the message
  • The first receiver that was able to process the message terminates the chain
  • The order of the receiver list matters

Demo

In the demo we’ll simulate the hierarchy of a company: an employee would like to make a large expenditure so he asks his manager. The manager is not entitled to approve the large sum and sends the request forward to the VP. The VP is not entitled either to approve the request so sends it to the President. The President is the highest authority in the hierarchy who will either approve or disapprove the request and sends the response back to the original employee.

Open Visual Studio and create a blank solution. Insert a class library called Domain. You can remove Class1.cs. We’ll build up the components one by one. We’ll start with the abstraction for an expense report, IExpenseReport:

public interface IExpenseReport
{
	Decimal Total { get; }
}

An expense report can thus have a total sum.

The IExpenseApprover interface represents any object that is entitled to approve expense reports:

public interface IExpenseApprover
{
	ApprovalResponse ApproveExpense(IExpenseReport expenseReport);
}

…where ApprovalResponse is an enumeration:

public enum ApprovalResponse
{
	Denied,
	Approved,
	BeyondApprovalLimit
}

The concrete implementation of the IExpenseReport is very straightforward:

public class ExpenseReport : IExpenseReport
{
	public ExpenseReport(Decimal total)
	{
		Total = total;
	}

	public decimal Total
	{
		get;
		private set;
	}
}

The Employee class implements the IExpenseApprover interface:

public class Employee : IExpenseApprover
{
	public Employee(string name, Decimal approvalLimit)
	{
		Name = name;
		_approvalLimit = approvalLimit;
	}

	public string Name { get; private set; }

	public ApprovalResponse ApproveExpense(IExpenseReport expenseReport)
	{
		return expenseReport.Total > _approvalLimit
				? ApprovalResponse.BeyondApprovalLimit
				: ApprovalResponse.Approved;
	}

	private readonly Decimal _approvalLimit;
}

As you can see the constructor needs a name and an approval limit. The implemented ApproveExpense method simply checks the total value of the expense report against the approval limit. If the total does not exceed the limit the expense is approved, otherwise the method indicates that the total is too high for this employee to approve.

Add a Console application called Approval to the solution and add a reference to the Domain library. We’ll first check what the approval process might look like without the pattern applied:

static void Main(string[] args)
{
	List<Employee> managers = new List<Employee>
	{
		new Employee("William Worker", Decimal.Zero),
		new Employee("Mary Manager", new Decimal(1000)),
		new Employee("Victor Vicepres", new Decimal(5000)),
		new Employee("Paula President", new Decimal(20000))
	};

	Decimal expenseReportAmount;
	while (ConsoleInput.TryReadDecimal("Expense report amount:", out expenseReportAmount))
	{
		IExpenseReport expense = new ExpenseReport(expenseReportAmount);

		bool expenseProcessed = false;

		foreach (Employee approver in managers)
		{
			ApprovalResponse response = approver.ApproveExpense(expense);

			if (response != ApprovalResponse.BeyondApprovalLimit)
			{
				Console.WriteLine("The request was {0}.", response);
				expenseProcessed = true;
				break;
			}
		}

		if (!expenseProcessed)
		{
			Console.WriteLine("No one was able to approve your expense.");
		}
	}
}

…where ConsoleInput is a helper class that looks as follows:

public static class ConsoleInput
{
	public static bool TryReadDecimal(string prompt, out Decimal value)
	{
		value = default(Decimal);

		while (true)
		{
			Console.WriteLine(prompt);
			string input = Console.ReadLine();

			if (string.IsNullOrEmpty(input))
			{
				return false;
			}

			try
			{
				value = Convert.ToDecimal(input);
				return true;
			}
			catch (FormatException)
			{
				Console.WriteLine("The input is not a valid decimal.");
			}
			catch (OverflowException)
			{
				Console.WriteLine("The input is not a valid decimal.");
			}
		}
	}
}

What can we say about the Main method? We first set up our employees with approval limits in increasing order. The next step is to read an expense amount from the command line. Using that sum we construct an expense report which is given to every employee in the list. Each employee is asked to approve the expense and we check the outcome. If the expense is approved then we break out of the foreach loop.

Build and run the application. Enter 5000 in the console and you’ll see that the expense was approved. You’ll recall that the VP had an approval limit of 5000 so it was that employee in the chain who approved it. Enter 50000 and you’ll see that nobody was able to approve the expense because it exceeds everyone’s limit.

What is wrong with this implementation? After all we iterate through the employee list to see if anyone is able to approve the expense. We get our response and we get to know the outcome.

The problem is that the caller is responsible for iterating through the list. This means that the logic of handling expense reports is encapsulated at the wrong level. Imagine that you as an employee should not ask each one of the managers above you for a yes or no answer. You should only have to turn to your boss who in turn will ask his or her boss etc. Our code should reflect this.

In order to achieve that we need to insert a new interface:

public interface IExpenseHandler
	{
		ApprovalResponse Approve(IExpenseReport expenseReport);
		void RegisterNext(IExpenseHandler next);
	}

The Approve method should look familiar from the previous descriptions. The RegisterNext method registers the next approver in the chain. It means that if I cannot approve the expense then I should go and ask the next person in line.

This interface represents a single link in the chain of responsibility.

The IExpenseHandler interface is implemented by the ExpenseHandler class:

public class ExpenseHandler : IExpenseHandler
{
	private readonly IExpenseApprover _approver;
	private IExpenseHandler _next;

	public ExpenseHandler(IExpenseApprover expenseApprover)
	{
		_approver = expenseApprover;
		_next = EndOfChainExpenseHandler.Instance;
	}

	public ApprovalResponse Approve(IExpenseReport expenseReport)
	{
		ApprovalResponse response = _approver.ApproveExpense(expenseReport);

		if (response == ApprovalResponse.BeyondApprovalLimit)
		{
			return _next.Approve(expenseReport);
		}

		return response;
	}

	public void RegisterNext(IExpenseHandler next)
	{
		_next = next;
	}
}

This class needs an IExpenseApprover in its constructor. This approver is an Employee just like before. The constructor makes sure that there is always a special end-of-chain handler in the approval chain through the EndOfChainExpenseHandler class. The Approve method receives an expense report. We ask the approver if they are able to approve the expense. If not, then we go to the next handler in the hierarchy, i.e. to the _next field.

The implementation of the EndOfChainExpenseHandler class follows below. It also implements the IExpenseHandler interface and it represents – as the name implies – the last member in the approval hierarchy. Its Instance property returns this special member of the chain according to the singleton pattern – more on that here.

public class EndOfChainExpenseHandler : IExpenseHandler
{
	private EndOfChainExpenseHandler() { }

	public static EndOfChainExpenseHandler Instance
	{
		get { return _instance; }
	}

	public ApprovalResponse Approve(IExpenseReport expenseReport)
	{
		return ApprovalResponse.Denied;
	}

	public void RegisterNext(IExpenseHandler next)
	{
		throw new InvalidOperationException("End of chain handler must be the end of the chain!");
	}

	private static readonly EndOfChainExpenseHandler _instance = new EndOfChainExpenseHandler();
}

The purpose of this class is to make sure that if the last person in the hierarchy, i.e. the President, is unable to approve the report then it is not passed on to a null reference – as there’s nobody above the President – but that there’s an automatic message handler that gives some default answer. Here we follow the null object pattern. In this case we reject the expense in the Approve method. If we made it this far in the approval chain then the expense must be rejected.

The revised Main method looks as follows:

static void Main(string[] args)
{
	ExpenseHandler william = new ExpenseHandler(new Employee("William Worker", Decimal.Zero));
	ExpenseHandler mary = new ExpenseHandler(new Employee("Mary Manager", new Decimal(1000)));
	ExpenseHandler victor = new ExpenseHandler(new Employee("Victor Vicepres", new Decimal(5000)));
	ExpenseHandler paula = new ExpenseHandler(new Employee("Paula President", new Decimal(20000)));

	william.RegisterNext(mary);
	mary.RegisterNext(victor);
	victor.RegisterNext(paula);

	Decimal expenseReportAmount;
	if (ConsoleInput.TryReadDecimal("Expense report amount:", out expenseReportAmount))
	{
		IExpenseReport expense = new ExpenseReport(expenseReportAmount);
		ApprovalResponse response = william.Approve(expense);
		Console.WriteLine("The request was {0}.", response);
	}
	Console.ReadKey();
}

You’ll see that we have not registered anyone for the President. This is where it becomes important that we set up a default end of chain approver in the ExpenseHandler constructor.

This is significantly less code than before. We start off by wrapping our employees in expense handlers, so each employee becomes an expense handler. Instead of putting them in a list we register the next employee in the hierarchy for each of them. Then as before we read the user’s input from the console, create an expense report and go to the first approver in the chain – william. How the response is produced is abstracted away behind the management chain we set up through the RegisterNext calls.

Build and run the application. Enter 1000 and you’ll see that it is approved. Enter 30000 and you’ll see that it is rejected – and the caller is oblivious of who rejected the request and why.

So this is the Chain of Responsibility pattern for you!

View the list of posts on Architecture and Patterns here.

Design patterns and practices in .NET: the Template Method design pattern

Introduction

The Template Method pattern is best used when you have an algorithm consisting of certain steps and you want to allow for different implementations of these steps. The implementation details of each step can vary but the structure and order of the steps are enforced.

A good example is games:

  1. Set up the game
  2. Take turns
  3. Game is over
  4. Display the winner

A large number of games can implement this algorithm, such as Monopoly, Chess, card games etc. Each game is set up and played in a different way but they follow the same order.
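Those four steps translate almost literally into a template method. The sketch below uses a hypothetical Game base class – not part of the demo later in this post – where PlayOneGame fixes the order of the steps and the subclasses fill in the details:

```csharp
using System;

// Hypothetical Game base class: PlayOneGame is the template method that
// fixes the order of the steps; subclasses fill in the details.
public abstract class Game
{
	public void PlayOneGame()
	{
		InitializeGame();
		while (!IsGameOver())
		{
			TakeTurn();
		}
		AnnounceWinner();
	}

	protected abstract void InitializeGame();
	protected abstract void TakeTurn();
	protected abstract bool IsGameOver();
	protected abstract void AnnounceWinner();
}

// A deliberately trivial implementation: a one-turn coin toss.
public class CoinToss : Game
{
	public int TurnsTaken { get; private set; }

	protected override void InitializeGame() { Console.WriteLine("Toss a coin!"); }
	protected override void TakeTurn() { TurnsTaken++; }
	protected override bool IsGameOver() { return TurnsTaken > 0; }
	protected override void AnnounceWinner() { Console.WriteLine("Heads wins."); }
}

public class GameDemo
{
	public static void Main()
	{
		Game game = new CoinToss();
		game.PlayOneGame(); // the four steps always run in the same order
	}
}
```

Monopoly or Chess would override the same four methods with much richer bodies, but no subclass can change the order in which they run.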

The Template pattern is very much based around inheritance. The algorithm represents an abstraction and the concrete game types are the implementations, i.e. the subclasses of that abstraction. It is of course plausible that some steps in the algorithm will be implemented in the abstraction while the others will be overridden in the implementing classes.

Note that a prerequisite for this pattern to be applied properly is the rigidness of the algorithm steps. The steps must be known and well defined. The pattern relies on inheritance, rather than composition, and merging two child algorithms into one can prove difficult. If you find that the Template pattern is too limiting in your application then consider the Strategy or the Decorator patterns.

This pattern helps to implement the so-called Hollywood principle: don’t call us, we’ll call you. It means that high-level components, i.e. the superclasses, should not depend on low-level ones, i.e. the implementing subclasses. A base class with a template method is a high-level component and clients should depend on this class. The base class will include one or more template methods whose steps the subclasses implement, i.e. it is the base class calling the implementations and not vice versa. In other words, the Hollywood principle is applied from the point of view of the base classes: dear implementing classes, don’t call us, we’ll call you.

Demo

Open Visual Studio and create a new blank solution. We’ll simulate a simple dispatch service where shipping an item must go through specific steps regardless of which specific service completes the shipment. Insert a new Console application into the solution.

We’ll start with the most important component of the pattern: the base class that must be respected by each implementation. Add a class called OrderShipment:

public abstract class OrderShipment
{
	public string ShippingAddress { get; set; }
	public string Label { get; set; }

	public void Ship(TextWriter writer)
	{
		VerifyShippingData();
		GetShippingLabelFromCarrier();
		PrintLabel(writer);
	}

	public virtual void VerifyShippingData()
	{
		if (String.IsNullOrEmpty(ShippingAddress))
		{
			throw new ApplicationException("Invalid address.");
		}
	}

	public abstract void GetShippingLabelFromCarrier();

	public virtual void PrintLabel(TextWriter writer)
	{
		writer.Write(Label);
	}
}

The template method that enforces the order of the steps is Ship. It calls three methods in a specific order. Two of them – VerifyShippingData and PrintLabel – are virtual and have a default implementation. They can of course be overridden. The third method, GetShippingLabelFromCarrier, is abstract as the base class cannot implement it. The superclass has no way of knowing what a service-specific shipping label looks like – that is delegated to the implementations. We’ll simulate two services, UPS and FedEx:

public class FedExOrderShipment : OrderShipment
{
	public override void GetShippingLabelFromCarrier()
	{
		// Call FedEx Web Service
		Label = String.Format("FedEx:[{0}]", ShippingAddress);
	}
}

public class UpsOrderShipment : OrderShipment
{
	public override void GetShippingLabelFromCarrier()
	{
		// Call UPS Web Service
		Label = String.Format("UPS:[{0}]", ShippingAddress);
	}
}

The implementations should be quite straightforward: they create service-specific shipping labels and assign them to the Label property. There’s of course nothing stopping the concrete classes from overriding any other step in the algorithm. Adding new shipping services is very easy: just create a new implementation. Let’s see how a client would communicate with the services:

static void Main(string[] args)
{
	OrderShipment service = new UpsOrderShipment();
	service.ShippingAddress = "New York";
	service.Ship(Console.Out);

	OrderShipment serviceTwo = new FedExOrderShipment();
	serviceTwo.ShippingAddress = "Los Angeles";
	serviceTwo.Ship(Console.Out);

	Console.ReadKey();
}

Run the programme and you’ll see the service-specific labels in the Console. The client calls the Template method Ship which ensures that the steps in the shipping algorithm are carried out in a certain order.

It is of course not optimal to create the specific OrderShipment classes like that, i.e. directly with the new keyword as it introduces tight coupling. Consider using a factory for building the correct implementation. However, this solution is satisfactory for demo purposes.
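To illustrate one possible direction, here is a compact copy of the shipment classes above plus a hypothetical OrderShipmentFactory keyed on a carrier name – the factory name and the string keys are made up for this sketch:

```csharp
using System;
using System.IO;

// Compact copies of the shipment types above, plus a hypothetical
// OrderShipmentFactory; the factory name and string keys are made up here.
public abstract class OrderShipment
{
	public string ShippingAddress { get; set; }
	public string Label { get; set; }

	public void Ship(TextWriter writer) // the template method
	{
		GetShippingLabelFromCarrier();
		writer.Write(Label);
	}

	public abstract void GetShippingLabelFromCarrier();
}

public class UpsOrderShipment : OrderShipment
{
	public override void GetShippingLabelFromCarrier()
	{
		Label = String.Format("UPS:[{0}]", ShippingAddress);
	}
}

public class FedExOrderShipment : OrderShipment
{
	public override void GetShippingLabelFromCarrier()
	{
		Label = String.Format("FedEx:[{0}]", ShippingAddress);
	}
}

public static class OrderShipmentFactory
{
	public static OrderShipment Create(string carrier)
	{
		switch (carrier)
		{
			case "UPS": return new UpsOrderShipment();
			case "FedEx": return new FedExOrderShipment();
			default: throw new ArgumentException("Unknown carrier: " + carrier);
		}
	}
}

public class FactoryDemo
{
	public static void Main()
	{
		// The client no longer news up a concrete type directly.
		OrderShipment shipment = OrderShipmentFactory.Create("UPS");
		shipment.ShippingAddress = "New York";
		shipment.Ship(Console.Out); // prints UPS:[New York]
	}
}
```

The client now depends only on the abstract OrderShipment and a string key, so adding a new carrier touches the factory and nothing else.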

View the list of posts on Architecture and Patterns here.

Design patterns and practices in .NET: the State pattern

Introduction

The State design pattern allows an object to change the behaviour of its methods based on its internal state. A typical scenario for the state pattern is when an object goes through different phases. An issue in a bug tracking application can have the following states: inserted, reviewed, rejected, processing, resolved and possibly many others. Depending on the state of the bug the behaviour of the underlying system may also change: some methods become (un)available and some change their behaviour. You may have seen or even produced code similar to this:

public void ProcessBug()
{
	switch (state)
	{
		case "Inserted":
			//call another method based on the current state
			break;
		case "Reviewed":
			//call another method based on the current state
			break;
		case "Rejected":
			//call another method based on the current state
			break;
		case "Resolved":
			//call another method based on the current state
			break;
	}
}

Here we change the behaviour of the ProcessBug() method based on the “state” variable, which represents the state of the bug. You can imagine that once a bug has reached the Rejected status it cannot be Reviewed any more. Also, once it has been reviewed, it cannot be deleted. There are other scenarios like this where the available actions and paths depend on the actual state of an object.

Suppose you have public methods to perform certain operations on an object: Insert, Delete, Edit, Resolve, Reject. If you follow the above solution then you will have to insert a switch statement in each of them to check the actual state of the object and act accordingly. This is clearly not maintainable: it’s easy to get lost in the chain of logic, it gets difficult to update the code if the rules change and the class grows unreasonably large compared to the amount of logic carried out.

There are other issues with the naive switch-statement approach:

  • The states are hard coded which offers no or little extensibility
  • If we introduce a new state we have to extend every single switch statement to account for it
  • The actions for a particular state are spread across the methods: a change in the handling of one state may have an effect on the other states
  • Difficult to unit test: each method can have a switch statement creating many permutations of the inputs and the corresponding outputs

In the switch statement solution the states are relegated to simple string values. In reality they are likely to be first-class objects that are part of the core Domain. Hence the state-specific logic should be encapsulated into separate objects that can be tested independently of the other concrete state types.

Demo

We’ll simulate an e-commerce application where an order can go through the following states: New, Shipped, Cancelled. The rules are simple: a new order can be shipped or cancelled. Shipped and cancelled orders cannot be shipped or cancelled again.

Fire up Visual Studio and create a blank solution. Insert a class library called Domains. You can delete Class1.cs. The first item we’ll insert is a simple enumeration:

public enum OrderStatus
{
	New,
	Shipped,
	Cancelled
}

Next we’ll insert the interface that each State will need to implement, IOrderState:

public interface IOrderState
{
	bool CanShip(Order order);
	void Ship(Order order);
	bool CanCancel(Order order);
	void Cancel(Order order);
	OrderStatus Status { get; }
}

Each concrete state will need to handle these methods independently of the other state types. The Order domain looks like this:

public class Order
{
	private IOrderState _orderState;

	public Order(IOrderState orderState)
	{
		_orderState = orderState;
	}

	public int Id { get; set; }
	public string Customer { get; set; }
	public DateTime OrderDate { get; set; }

	public OrderStatus Status
	{
		get
		{
			return _orderState.Status;
		}
	}

	public bool CanCancel()
	{
		return _orderState.CanCancel(this);
	}

	public void Cancel()
	{
		if (CanCancel())
			_orderState.Cancel(this);
	}

	public bool CanShip()
	{
		return _orderState.CanShip(this);
	}

	public void Ship()
	{
		if (CanShip())
			_orderState.Ship(this);
	}

	internal void Change(IOrderState orderState)
	{
		_orderState = orderState;
	}
}

As you can see each Order related action is delegated to the OrderState object where the Order object is completely oblivious of the actual state. It only sees the interface, i.e. an abstraction, which facilitates loose coupling and enhanced testability.

Let’s implement the Cancelled state first:

public class CancelledState : IOrderState
{
	public bool CanShip(Order order)
	{
		return false;
	}

	public void Ship(Order order)
	{
		throw new InvalidOperationException("Cannot ship, already cancelled.");
	}

	public bool CanCancel(Order order)
	{
		return false;
	}

	public void Cancel(Order order)
	{
		throw new InvalidOperationException("Already cancelled.");
	}

	public OrderStatus Status
	{
		get
		{
			return OrderStatus.Cancelled;
		}
	}
}

This should be easy to follow: we incorporate the cancellation and shipping rules within this concrete state.

ShippedState.cs is also straightforward:

 
public class ShippedState : IOrderState
	{
		public bool CanShip(Order order)
		{
			return false;
		}

		public void Ship(Order order)
		{
			throw new InvalidOperationException("Already shipped.");
		}

		public bool CanCancel(Order order)
		{
			return false;
		}

		public void Cancel(Order order)
		{
			throw new InvalidOperationException("Already shipped, cannot cancel.");
		}

		public OrderStatus Status
		{
			get { return OrderStatus.Shipped; }
		}
	}

NewState.cs is somewhat more exciting: this is the state where shipping or cancelling actually transitions the order to a new state:

 
public class NewState : IOrderState
	{
		public bool CanShip(Order order)
		{
			return true;
		}

		public void Ship(Order order)
		{
			//actual shipping logic ignored, only changing the status
			order.Change(new ShippedState());
		}

		public bool CanCancel(Order order)
		{
			return true;
		}

		public void Cancel(Order order)
		{
			//actual cancellation logic ignored, only changing the status;
			order.Change(new CancelledState());
		}

		public OrderStatus Status
		{
			get { return OrderStatus.New; }
		}
	}

That’s it really, the state pattern is not more complicated than this.

We separated out the state-dependent logic to standalone classes that can be tested independently. It’s now easy to introduce new states later. We won’t have to extend dozens of switch statements – the new state object will handle that logic internally. The Order object is no longer concerned with the concrete state objects – it delegates the cancellation and shipping actions to the states.
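A minimal usage sketch shows the transitions in action. This assumes the classes above, with the Change method made accessible to the state classes (it cannot be private, since NewState calls order.Change):

```csharp
Order order = new Order(new NewState()) { Id = 1, Customer = "John" };

Console.WriteLine(order.Status);    // New
Console.WriteLine(order.CanShip()); // True

order.Ship();                       // NewState swaps in a ShippedState
Console.WriteLine(order.Status);    // Shipped

order.Cancel();                     // ignored: CanCancel() is false in ShippedState
Console.WriteLine(order.Status);    // still Shipped
```

The client never touches a concrete state type; it only calls the public Order members.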

View the list of posts on Architecture and Patterns here.

Design patterns and practices in .NET: the Singleton pattern

Introduction

The idea of the singleton pattern is that a certain class should only have one single instance in the application. All other classes that depend on it should all share the same instance instead of a new one. Usually singletons are only created when they are first needed – the same existing instance is returned upon subsequent calls. This is called lazy construction.

The singleton class is responsible for creating the new instance. It also needs to ensure that only this one instance is created and the existing instance is used in subsequent calls.

If you are sure that there should be only one instance of a class then a singleton pattern is certainly a possible solution. Note the following additional rules:

  • The singleton class must be accessible to clients
  • The class should not require parameters for its construction, as input parameters are a sign that multiple different versions of the class are created – this breaks the most important rule, i.e. that “there can be only one”

You may have seen public methods that take the following very simple form:

SingletonClass instance = SingletonClass.GetInstance();

This almost certainly returns a singleton instance. The GetInstance() method is the only way a client can get hold of an instance, i.e. the client cannot call new SingletonClass(). This is due to a private constructor hidden within the SingletonClass implementation.

Basic demo

Open Visual Studio and create a blank solution called Singleton. Add a class library to the solution, remove Class1 and add a class called Singleton to it. The most simple implementation of the singleton pattern looks like this:

public class Singleton
	{
		private static Singleton _instance;

		private Singleton()
		{
		}

		public static Singleton Instance
		{
			get
			{
				if (_instance == null)
				{
					_instance = new Singleton();
				}
				return _instance;
			}
		}
	}

Inspect the code and you’ll note the following characteristics:

  • The class has a single static instance of itself
  • The constructor is private
  • The object instance is available through the static Instance property
  • The property inspects the state of the private instance; if it’s null then it creates a new instance otherwise just returns the existing one – lazy loading

Note that this implementation is not thread safe: if two threads evaluate the null check at the same time, each may create its own instance. Don’t use this example where the singleton class is accessed from multiple threads, e.g. in an ASP.NET web application. We’ll see a thread-safe example soon.

It’s perfectly acceptable that the Singleton class has multiple public methods. You can then access those methods as follows:

Singleton.Instance.PerformWork();

Singleton instance = Singleton.Instance;
instance.PerformWork();

//pass as parameter
PerformSomeOtherWork(Singleton.Instance);

Add a new class to the class library called ThreadSafeSingleton with the following implementation:

public class ThreadSafeSingleton
	{
		private ThreadSafeSingleton()
		{
		}

		public static ThreadSafeSingleton Instance
		{
			get { return Nested.instance; }
		}

		private class Nested
		{
			static Nested()
			{
			}

			internal static readonly ThreadSafeSingleton instance = new ThreadSafeSingleton();
		}
	}

This is the construction that is recommended for multithreaded environments, such as web applications. Note that it doesn’t take any locks, which would otherwise slow things down. Note the following:

  • As in the previous implementation we have a private constructor
  • We also have a public static property to get hold of the singleton instance
  • The implementation relies on the way type initialisers (static constructors) work in .NET
  • The runtime runs a type initialiser lazily – on first use of the type – as long as the type is not marked with the beforefieldinit flag
  • The C# compiler omits the beforefieldinit flag whenever a type has an explicit static constructor
  • That is why the nested class Nested has an empty static constructor: it looks redundant, but it is what guarantees lazy initialisation
  • Within the nested class we have a static ThreadSafeSingleton field
  • This field is set to a new ThreadSafeSingleton statically when it’s first referenced
  • That reference only occurs in the Instance property getter which refers to the nested ‘instance’ field
  • The first time the Instance getter is called, the Nested type is initialised and a single ThreadSafeSingleton instance is created and stored in its ‘instance’ field
  • Subsequent requests will simply receive the existing instance of this static field
  • This way the “There can be only one” rule is enforced
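For completeness: since .NET 4 the same lazy, thread-safe behaviour can be expressed more directly with Lazy&lt;T&gt;. A possible alternative sketch:

```csharp
using System;

public sealed class LazySingleton
{
	// Lazy<T> defaults to LazyThreadSafetyMode.ExecutionAndPublication,
	// so the factory delegate runs exactly once even under contention
	private static readonly Lazy<LazySingleton> _lazy =
		new Lazy<LazySingleton>(() => new LazySingleton());

	private LazySingleton()
	{
	}

	public static LazySingleton Instance
	{
		get { return _lazy.Value; }
	}
}
```

The nested-class trick above predates Lazy&lt;T&gt; and still works fine; Lazy&lt;T&gt; simply states the intent more explicitly.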

Drawbacks

Singletons introduce tight coupling between the caller and the singleton making the software design more fragile. Singletons are also very difficult to test and are therefore often regarded as an anti-pattern by fierce advocates of testable code. In addition, singletons violate the ‘S’ in SOLID software design: the Single Responsibility Principle. Managing the object lifetime is not considered the responsibility of a class. This should be performed by a separate class.

However, using an Inversion-of-control (IoC) container we can avoid all of these drawbacks. The demo will show you a possible solution.

Demo

The demo will concentrate on an implementation of the pattern where we eliminate its drawbacks outlined above. This means that you should be somewhat familiar with dependency injection and IoC containers in general. You may have come across IoC containers such as StructureMap before. Even if you haven’t met these concepts before, it may still be worthwhile to read on – you may learn something new.

The demo application will simulate the simultaneous use of a file for file writes. The solution will make use of the .NET task library to perform file writes in a multithreaded fashion.

For each dependency we’ll need an interface to eliminate the tight coupling mentioned before. Each dependency will be resolved using an IoC container called Unity.

Add a new Console app called FileLoggerAsync to the solution and set it as the startup project. Add the following package reference using NuGet:

Unity package in NuGet

The file writer will simply write a series of numbers to a text file. Add the following interface to the project:

public interface INumberWriter
	{
		void WriteNumbersToFile(int max);
	}

The parameter ‘max’ simply means the upper boundary of the series to save to disk.

We will also need an object that will perform the file writes. This will be our singleton class eventually, but it will be hidden behind an interface:

public interface IFileLogger
	{
		void WriteLineToFile(string value);
		void CloseFile();
	}

We don’t want the client to be concerned with the creation of the file logger so the creation will be delegated to an abstract factory – more on this topic here:

public interface IFileLoggerFactory
	{
		IFileLogger Create();
	}

Not much to comment there I presume.

We’ll first implement the singleton file logger which implements the IFileLogger interface:

public class FileLoggerLazySingleton : IFileLogger
	{
		private readonly TextWriter _logfile;
		private const string filePath = @"c:\logfile.txt";

		private FileLoggerLazySingleton()
		{
			_logfile = GetFileStream();
		}

		public static FileLoggerLazySingleton Instance
		{
			get
			{
				return Nested.instance;
			}
		}
		private class Nested
		{
			static Nested()
			{
			}

			internal static readonly FileLoggerLazySingleton instance = new FileLoggerLazySingleton();
		}

		public void WriteLineToFile(string value)
		{
			_logfile.WriteLine(value);
		}

		public void CloseFile()
		{
			_logfile.Close();
		}

		private TextWriter GetFileStream()
		{
			return TextWriter.Synchronized(File.AppendText(filePath));
		}
	}

You’ll recognise most of the code from the thread-safe singleton implementation shown above. The rest handles writing to the file at the specified path. It is of course not good practice to hard-code the log file path like that, but it’ll do in this example. Feel free to change this value, but make sure the application can write to that location – File.AppendText will create the file if it doesn’t exist.

Next we’ll implement the IFileLoggerFactory interface:

public class LazySingletonFileLoggerFactory : IFileLoggerFactory
	{
		public IFileLogger Create()
		{
			return FileLoggerLazySingleton.Instance;
		}
	}

It returns the singleton instance of the FileLoggerLazySingleton class. It’s time to implement the INumberWriter interface:

public class AsyncNumberWriter : INumberWriter
	{
		private readonly IFileLoggerFactory _fileLoggerFactory;

		public AsyncNumberWriter(IFileLoggerFactory fileLoggerFactory)
		{
			_fileLoggerFactory = fileLoggerFactory;
		}

		public void WriteNumbersToFile(int max)
		{
			IFileLogger myLogger = null;
			Action<int> logToFile = i =>
			{
				myLogger = _fileLoggerFactory.Create();
				myLogger.WriteLineToFile("Ready for next number...");
				myLogger.WriteLineToFile("Logged number: " + i);
			};
			Parallel.For(0, max, logToFile);
			myLogger.CloseFile();
		}
	}

Let’s see what’s happening here. The class will need a factory to retrieve an instance of IFileLogger – the class will be oblivious to the actual implementation type. Hence we have eliminated the tight coupling problem mentioned above. Then we implement the WriteNumbersToFile method:

  • Initially the IFileLogger object will be null
  • Then we create an inline method using the Action object
  • The Action represents a method which accepts an integer parameter i
  • In the method body we construct the file logger using the file logger factory
  • Then we write a couple of things to the file

The Action will be used in a parallel loop – the parallel version of a standard for loop. The iterations are not executed one after the other; they are scheduled across multiple threads. The loop variable starts at 0 and runs up to, but not including, max, and its value is passed into the inline function defined by the Action object. So the method defined in the Action object is run in each iteration of the Parallel.For construct. It is important to note that in each iteration the IFileLogger object is retrieved using the IFileLoggerFactory object. Thus we simulate multiple threads accessing the same file to write some lines.
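As a standalone illustration of Parallel.For, independent of the logger classes above:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class ParallelForDemo
{
	static void Main()
	{
		// thread-safe collection: many iterations may add items concurrently
		ConcurrentBag<int> seen = new ConcurrentBag<int>();

		// runs the body for i = 0..99; iterations are scheduled across threads,
		// so the order in which they execute is not deterministic
		Parallel.For(0, 100, i => seen.Add(i));

		Console.WriteLine(seen.Count); // 100 – every iteration ran exactly once
	}
}
```

Every value from 0 to 99 is handled exactly once; only the ordering is non-deterministic, which is exactly what the log file output below demonstrates.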

Now we’re ready to hook up the individual elements in Program.cs. Let’s first set up the Unity container. Insert the following files to the project:

IoC.cs:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Practices.Unity;

namespace FileLogger
{
	public static class IoC
	{
		private static IUnityContainer _container;

		public static void Initialize(IUnityContainer container)
		{
			_container = container;
		}

		public static TBase Resolve<TBase>()
		{
			return _container.Resolve<TBase>();
		}
	}
}

UnityDependencyResolver.cs:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Practices.Unity;

namespace FileLogger
{
	public class UnityDependencyResolver
	{
		private static readonly IUnityContainer _container;
		static UnityDependencyResolver()
		{
			_container = new UnityContainer();
			IoC.Initialize(_container);
		}

		public void EnsureDependenciesRegistered()
		{
			_container.RegisterType<IFileLoggerFactory, LazySingletonFileLoggerFactory>();
		}

		public IUnityContainer Container
		{
			get
			{
				return _container;
			}
		}
	}
}

Don’t worry if you don’t understand what’s going on here. The purpose of these classes is to initialise the Unity dependency container and make sure that when Unity encounters a dependency of type IFileLoggerFactory it creates a LazySingletonFileLoggerFactory ready to be injected.

The last missing bit is Program.cs:

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Practices.Unity;

namespace FileLogger
{
	class Program
	{
		private static UnityDependencyResolver _dependencyResolver;
		private static INumberWriter _numberWriter;

		private static void RegisterTypes()
		{
			_dependencyResolver = new UnityDependencyResolver();
			_dependencyResolver.EnsureDependenciesRegistered();
			_dependencyResolver.Container.RegisterType<INumberWriter, AsyncNumberWriter>();
			
		}

		public static void Main(string[] args)
		{
			RegisterTypes();
			_numberWriter = _dependencyResolver.Container.Resolve<INumberWriter>();
			_numberWriter.WriteNumbersToFile(100);
                        Console.WriteLine("File write done.");
			Console.ReadLine();
		}
	}
}

In RegisterTypes we simply register another dependency: INumberWriter is resolved as the concrete type AsyncNumberWriter. In the Main method we then retrieve the number writer dependency and call its WriteNumbersToFile method. Recall that AsyncNumberWriter will get hold of the file logger in each of its 100 iterations and write a couple of lines without closing the file at the end of each iteration.

Run the console app and you should see “File write done” almost instantly. The most expensive method, i.e. WriteNumbersToFile has to get hold of a new FileLogger instance only in the first iteration and will get the same instance over and over again in subsequent loops.

Inspect the contents of the file. You’ll see that the iteration was indeed performed in a parallel way as the numbers do not follow any specific order, i.e. the outcome is not deterministic:

Ready for next number…
Ready for next number…
Logged number: 50
Ready for next number…
Logged number: 51
Logged number: 25
Ready for next number…
Ready for next number…
Logged number: 52
Logged number: 26
Ready for next number…
Logged number: 27
Ready for next number…
Ready for next number…
Logged number: 53
Ready for next number…
Logged number: 28
Logged number: 54
Ready for next number…
Logged number: 55
Ready for next number…

etc…

So, we have successfully implemented the singleton pattern in a way that eliminates its weaknesses: this solution is threadsafe, testable and loosely coupled.

UPDATE:

Please read the tip by Learner in the comments section regarding the safety of using static initialisation:

“Cases do exist, however, in which you cannot rely on the common language runtime to ensure thread safety, as in the Static Initialization example.’ as mentioned under “Multithreaded Singleton” section of the following link on MSDN. Instead of using static initialization, the msdn example uses volatile and Double-Check Locking and I have seen people mostly using the same.”
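The MSDN-style multithreaded variant the comment refers to combines a volatile field with double-check locking. A sketch of that approach:

```csharp
using System;

public sealed class MultithreadedSingleton
{
	// volatile prevents the write to _instance from being reordered
	// relative to the null checks on other threads
	private static volatile MultithreadedSingleton _instance;
	private static readonly object _syncRoot = new object();

	private MultithreadedSingleton()
	{
	}

	public static MultithreadedSingleton Instance
	{
		get
		{
			if (_instance == null)              // first check avoids locking on every call
			{
				lock (_syncRoot)
				{
					if (_instance == null)      // second check: another thread may have won the race
					{
						_instance = new MultithreadedSingleton();
					}
				}
			}
			return _instance;
		}
	}
}
```

Both this and the nested-class version are thread safe; double-check locking just makes the synchronisation explicit rather than relying on type-initialiser guarantees.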

View the list of posts on Architecture and Patterns here.

Design patterns and practices in .NET: the Null Object pattern

Introduction

Null references are a fact of life for a programmer. Some would actually call them a curse. We have to start thinking about missing values even in the case of the simplest Console application in .NET:

static void Main(string[] args)

Although the runtime guarantees that args itself is never null, accessing args[0] when no arguments were passed throws an IndexOutOfRangeException because the array is empty. You already have to add a defensive check at this early stage.

This problem is so pervasive that if your class or method has some dependency then the first thing you need to check is if somebody has tried to pass in a null in a guard clause:

if (dependency == null) throw new ArgumentNullException("Dependency name");

Also, if you call a method that returns an object and you intend to use that object in some way then you may need to include the following check:

if (result == null) return;

…or throw an exception, it doesn’t matter. The point is that your code may be littered with those checks disrupting the flow. It would be a lot more efficient to be able to assume that the return value has been instantiated so that it is a ‘valid’ object that we can use without checking for null values first.

This is exactly the goal of the Null Object pattern: to be able to provide an ’empty’ object instead of ‘null’ so that we don’t need to check for null values all the time. The Null Object will be sort of a zero-implementation of the returned object type where the object does not perform anything meaningful.

Also, there may be times where you just don’t want to make use of a dependency. You cannot pass in null as that would throw an exception – instead you can pass in a valid object that does not perform anything useful. All method calls on the Null Object will be valid, meaning you don’t need to worry about null references. This may occur often in testing scenarios where the test may not care about the behaviour of the dependency as it wants to test the true logic of the system under test instead.

Of course it’s not possible to get rid of null checks completely. There will still be places where you need to perform them.

This pattern is also known by other names: Stub, Active Nothing, Active Null.

Demo

Open Visual Studio and create a new Console application. We’ll simulate that a method expects a caching strategy to cache some object. The example is similar to and builds on the example available here under the discussion on the Adapter Pattern. Insert the following interface:

public interface ICacheStorage
	{
		void Remove(string key);
		void Store(string key, object data);
		T Retrieve<T>(string key);
	}

Insert the following concrete type that implements the HttpContext.Current.Cache type of solution:

public class HttpContextCacheStorage : ICacheStorage
	{

		public void Remove(string key)
		{
			HttpContext.Current.Cache.Remove(key);
		}

		public void Store(string key, object data)
		{
			HttpContext.Current.Cache.Insert(key, data);
		}

		public T Retrieve<T>(string key)
		{
			T itemsStored = (T)HttpContext.Current.Cache.Get(key);
			if (itemsStored == null)
			{
				itemsStored = default(T);
			}
			return itemsStored;
		}
	}

You’ll need to add a reference to System.Web. It’s of course not too wise to rely on the HttpContext in a Console application but that’s beside the point right now.

Add the following private method to Program.cs:

private static void PerformWork(ICacheStorage cacheStorage)
{
	string key = "key";
	object o = cacheStorage.Retrieve<object>(key);
	if (o == null)
	{
		//simulate database lookup
		o = new object();
		cacheStorage.Store(key, o);
	}
	//perform some work on object o...
}

We first check whether object ‘o’ is available in the cache provided by the injected ICacheStorage object. If not then we fetch it from some source, like a DB and then cache it.

What if the caller doesn’t want to cache the object? They might intentionally force a database lookup. If they pass in a null then they’ll get a NullReferenceException. Also, if we want to test this method using TDD then we may not be interested in caching. The test will probably want to exercise the true logic of the code, i.e. the ‘perform some work on object o’ bit, where the caching strategy is irrelevant.

The solution is a caching strategy that doesn’t do any work:

public class NullObjectCache : ICacheStorage
	{
		public void Remove(string key)
		{
			
		}

		public void Store(string key, object data)
		{
			
		}

		public T Retrieve<T>(string key)
		{
			return default(T);
		}
	}

If you pass this implementation to PerformWork then the object will never be cached and the Retrieve method will always return null. This forces PerformWork to look up the object in the storage. Also, you can pass this implementation from a unit test so that the caching dependency is effectively ignored.
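Putting the two implementations side by side – a usage sketch built on the PerformWork method and the cache classes above:

```csharp
// no caching: Retrieve always returns null, so every call
// performs the simulated database lookup
PerformWork(new NullObjectCache());

// real caching – only meaningful inside an ASP.NET request
// where HttpContext.Current exists
PerformWork(new HttpContextCacheStorage());
```

PerformWork is identical in both cases; no null check on the cacheStorage parameter is needed.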

Another example

Check out my post on the factory patterns. You will find an example of the Null Object Pattern there in the form of the UnknownMachine class. Instead of returning null when no concrete type is found, the CreateInstance method of the MachineFactory class returns this empty object, which doesn’t perform anything.

Consequences

Using this pattern wisely will result in fewer checks for null values: your code will be cleaner and more concise. Also, the need for code branching may decrease.

The caller must obviously know that a Null Object is returned instead of a null, otherwise they may still check for nulls. You can help them by commenting your methods and classes properly. Also, there are certainly cases where an empty NullObject, such as UnknownMachine mentioned above may be confusing for the caller. They will call the TurnOn() method but will not see anything happening. You can extend the NullObject implementation with messages indicating this status, e.g. “Cannot turn on an empty machine.” or something similar.

Null objects are quite often implemented as singletons – the subject of the next post: as all Null Object implementations of an abstraction are identical, i.e. they have the same properties and state, they can be shared across the application. This may become cumbersome in large applications where team members may not agree on what a Null Object representation should look like. Should it be empty? Should it have some minimal implementation? Then it’s wiser to allow for multiple representations of the Null Object.

View the list of posts on Architecture and Patterns here.

Design patterns and practices in .NET: the Adapter Pattern

Introduction

The adapter pattern is definitely one of the most utilised design patterns in software development. It can be used in the following situation:

Say you have a class – Class A – with a dependency on some interface type. On the other hand you have another class – Class B – that would be ideal to inject into Class A as the dependency, but Class B does not implement the necessary interface.

Another situation is when you want to factor out some tightly coupled dependency. A common scenario would be the direct usage of some .NET class, say HttpRuntime, in a method that you want to test. You must have a valid HttpRuntime while running the test, otherwise the test may fail for the wrong reason – which is of course not acceptable. The solution is to let the method depend on some abstraction that is injected into it. The question is how to extract HttpRuntime and make it implement some interface instead – after all, it’s unlikely that you’ll have write access to the HttpRuntime class, right?

The solution is to write an adapter that sits between Class A and Class B and wraps Class B’s functionality.

An example from the real world: if you travel from Britain to Sweden and try to plug in your PC in your hotel room you’ll fail, as the electric sockets in Sweden are different from those in the UK. Class A is your PC’s power plug and Class B is the socket in the wall. The two objects clearly don’t fit, but there’s a solution: you can insert a specially made adapter into the socket which makes the “conversion” between the plug and the socket, enabling you to pair them up.

In summary: the adapter pattern allows classes to work together that couldn’t otherwise due to incompatible interfaces. The adapter will take the form of an interface, which has the additional benefit of extensibility: you can write many implementations of this adapter interface, making sure your method is not tightly coupled to one single concrete implementation.

Demo

Open Visual Studio and create a new blank solution. Add a new class library called Domain and delete Class1.cs. Add a new class to this project called Customer. We’ll leave it void of any properties, that is not the point here. We want to expose the repository operations by an interface, so insert an interface called ICustomerRepository to the Domain layer:

public interface ICustomerRepository
	{
		IList<Customer> GetCustomers();
	}

Add another class library called Repository and a class called CustomerRepository which implements ICustomerRepository:

public class CustomerRepository : ICustomerRepository
	{
		public IList<Customer> GetCustomers()
		{
			//simulate database operation
			return new List<Customer>();
		}
	}

Add a new class library project called Service and a class called CustomerService to it:

public class CustomerService
	{
		private readonly ICustomerRepository _customerRepository;

		public CustomerService(ICustomerRepository customerRepository)
		{
			_customerRepository = customerRepository;
		}

		public IList<Customer> GetAllCustomers()
		{
			IList<Customer> customers;
			string storageKey = "GetAllCustomers";
			customers = (List<Customer>)HttpContext.Current.Cache.Get(storageKey);
			if (customers == null)
			{
				customers = _customerRepository.GetCustomers();
				HttpContext.Current.Cache.Insert(storageKey, customers);
			}

			return customers;
		}
	}

This bit of code should be straightforward: we want to return a list of customers using the abstract dependency ICustomerRepository. We check if the list is available in the HttpContext cache. If that’s the case then convert the cached value and return the customers. Otherwise fetch the list from the repository and put the value to the cache.

So what’s wrong with the GetAllCustomers method?

Testability

The method is difficult to test because of the dependency on the HttpContext class. If you want to get any reliable result from the test that tests the behaviour of this method you’ll need to somehow provide a valid HttpContext object. Otherwise if the test fails, then why did it fail? Was it a genuine failure, meaning that the customer list was not retrieved? Or was it because there was no HttpContext available? It’s the wrong approach making the test outcome dependent on such a volatile object.

Flexibility

With this implementation we’re stuck with the HttpContext as our caching solution. What if we want to change over to a different one, such as Memcached or System.Runtime.Caching? In that case we’d need to go in and manually replace the HttpContext solution with the new one. Even worse, let’s say all your service classes use HttpContext for caching and you want to make the transition to another caching solution for all of them. You probably see how tedious, time-consuming and error-prone this could be.

On a different note: the method also violates the Single Responsibility Principle as it performs caching in its body. Strictly speaking it should not be doing this as it then introduces a hidden side effect. The solution to that problem is provided by the Decorator pattern, which we’ll look at in the next blog post.

Solution

It’s clear that we have to factor out the HttpContext.Current.Cache object and let the consumer of CustomerService inject it instead – a simple design principle known as Dependency Injection. As usual, the most optimal option is to write an abstraction that encapsulates the functions of a cache solution. Insert the following interface to the Service layer:

public interface ICacheStorage
	{
		void Remove(string key);
		void Store(string key, object data);
		T Retrieve<T>(string key);
	}

I believe this is quite straightforward. Next we want to update CustomerService to depend on this interface:

public class CustomerService
	{
		private readonly ICustomerRepository _customerRepository;
		private readonly ICacheStorage _cacheStorage;

		public CustomerService(ICustomerRepository customerRepository, ICacheStorage cacheStorage)
		{
			_customerRepository = customerRepository;
			_cacheStorage = cacheStorage;
		}

		public IList<Customer> GetAllCustomers()
		{
			IList<Customer> customers;
			string storageKey = "GetAllCustomers";
			customers = _cacheStorage.Retrieve<List<Customer>>(storageKey);
			if (customers == null)
			{
				customers = _customerRepository.GetCustomers();
				_cacheStorage.Store(storageKey, customers);
			}

			return customers;
		}
	}

We’ve got rid of the HttpContext object, so the next task is to inject it somehow using the ICacheStorage interface. This is the essence of the Adapter pattern: write an adapter class that will resolve the incompatibility of our home-made ICacheStorage interface and HttpContext.Current.Cache. The solution is really simple. Add a new class to the Service layer called HttpContextCacheStorage:

public class HttpContextCacheStorage : ICacheStorage
	{

		public void Remove(string key)
		{
			HttpContext.Current.Cache.Remove(key);
		}

		public void Store(string key, object data)
		{
			HttpContext.Current.Cache.Insert(key, data);
		}

		public T Retrieve<T>(string key)
		{
			T itemsStored = (T)HttpContext.Current.Cache.Get(key);
			if (itemsStored == null)
			{
				itemsStored = default(T);
			}
			return itemsStored;
		}
	}

This is the adapter class that encapsulates the HttpContext caching object. You can now inject this concrete class into CustomerService. In the future, if you want to make use of a different caching solution, all you need to do is write another adapter for it: Memcached, Velocity, System.Runtime.Caching, you name it.
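Wiring everything up from the consuming side might look like this – a sketch using the classes above:

```csharp
// the service only sees the two abstractions;
// swapping the cache means swapping one adapter instance
ICustomerRepository repository = new CustomerRepository();
ICacheStorage cacheStorage = new HttpContextCacheStorage();

CustomerService customerService = new CustomerService(repository, cacheStorage);
IList<Customer> customers = customerService.GetAllCustomers();
```

In a test you would pass a fake ICacheStorage instead, and CustomerService would be none the wiser.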

View the list of posts on Architecture and Patterns here.

Test Driven Development in .NET Part 9: advanced topics in the Moq framework

In this post we’ll continue our discussion on the Moq framework. We’ll look at the following topics:

  • Strict mocking
  • Mocking base classes
  • Recursive mocking
  • Mock Repository

We’ll build on the test suite we’ve been working on in this series on Moq. This is the last post dedicated to Moq in the TDD series.

Strict and loose mocking

Strict mocking means that we must set up expectations on all members of a mock object otherwise an exception is thrown. Loose mocking on the other hand does not require explicit expectations on all class members; instead default values are returned when no expectation is explicitly declared.

The default behaviour in Moq is loose mocking.

To demonstrate strict mocking locate the call to _productRepository.Save in ProductService.Create. Let’s say that for whatever reason we make another call on _productRepository right after Save:

_productRepository.Fetch();

Have VS create the method stub for you. Run the tests and they should all pass. Go to the test called Then_repository_save_should_be_called() in When_creating_a_product.cs. We set up one expectation on the product repository, that is the Save method must be called. VerifyAll will verify that our expectations are met and ignores all other things. It does not care that we call Fetch after Save in the Create method and this is due to loose mocking. If there’s no expectation on Fetch, then it is ignored. If you want to change the behaviour to strict mocking then it’s very easy to do:

var mockProductRepository = new Mock<IProductRepository>(MockBehavior.Strict);

Run the tests and Then_repository_save_should_be_called() will fail. The exception thrown reads as follows: IProductRepository.Fetch() invocation failed with mock behavior Strict. Moq saw an interaction with the product repository that was not set up explicitly, so it threw an exception.
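To sum up the two behaviours, here is a minimal sketch using the IProductRepository members from the demo; this is just an illustration, not part of the test suite:

// Loose (the default): members without a setup simply succeed and return default values
var looseMock = new Mock<IProductRepository>();
looseMock.Object.Fetch(); // fine, no setup needed

// Strict: every member that is invoked must have an explicit setup
var strictMock = new Mock<IProductRepository>(MockBehavior.Strict);
strictMock.Setup(r => r.Fetch());
strictMock.Object.Fetch(); // allowed only because of the setup above; without it Moq throws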

Base class mocking

We often have base classes for related objects where common behaviour and abstract properties and methods are contained in the base. We want to be able to verify that the derived classes made the call to a method or property in the abstract class.

We want the customer to go through an initial property setting phase. This action will be performed through an abstract base class called PropertySetterBase. Open When_creating_a_product.cs and add a new method called Then_initial_properties_should_be_set() with the following Arrange and Act sections:

[Test]
		public void Then_initial_properties_should_be_set()
		{
			//Arrange
			Mock<ProductPropertySetter> productPropertySetter = new Mock<ProductPropertySetter>();
			Product product = new Product("ProductA", "High quality product");

			//Act
			productPropertySetter.Object.SetBasicProperties(product);
		}

Let VS create the missing classes and methods for you as we saw in previous blog posts in this series. As we said, we want to verify that a certain method is called in the base class of the ProductPropertySetter object.

Note that we create a mock of the object that we want to test. It is not some interface dependency we want to mock but the system under test itself. The Object property then yields the mocked implementation. Finally we call SetBasicProperties on the mock object.

Add the following assertion:

//Assert
productPropertySetter.Verify(x => x.SetProperties(product));

This time do not let VS create the method for you as it will insert it in the ProductPropertySetter class. Instead insert an abstract class called PropertySetterBase in the Domain layer and have ProductPropertySetter derive from it. Insert a method called SetProperties in PropertySetterBase which accepts a Product object. This should get rid of the compiler error.

Run the tests and the new test method will fail of course as SetBasicProperties is not implemented yet. Let’s implement it as follows:

public void SetBasicProperties(Product product)
		{
			SetProperties(product);
		}

Run the tests… …you should still see that the new test method fails. Remember the problem we had in the previous post, that we had to help Moq with the ‘virtual’ keyword? We have the same case here. Insert the ‘virtual’ keyword in the signature of the SetProperties method and then the test will pass.

Recursive mocking

We saw one form of recursive mocking in this post, where we mocked a chain of properties. There’s another form of recursive mocking: say that a class has a dependency which in turn creates another dependency within one of the class methods. A classic example is a factory that builds some other object based on some inputs. This other object is then an indirect dependency which the class uses in some way. We would like to be able to verify that the appropriate method of the dependency produced by the factory was called.

It’s time to create a new domain object; I don’t want to expand Product and Customer even more. The new object is called Shop and it will have an Address property. The address will be created using a factory. Add a new folder called Shop_Tests in the Tests layer and a new file called When_creating_a_new_shop.cs in that folder:

[TestFixture]
	public class When_creating_a_new_shop
	{
		[Test]
		public void The_address_should_be_created()
		{

		}
	}

In order to create a new shop we’ll need a ShopService that accepts an AddressFactory and a ShopRepository. So let’s mock them up:

//Arrange
			Mock<IShopRepository> mockShopRepository = new Mock<IShopRepository>();
			Mock<IAddressFactory> mockAddressFactory = new Mock<IAddressFactory>() { DefaultValue = DefaultValue.Mock };
			ShopService shopService = new ShopService(mockShopRepository.Object, mockAddressFactory.Object);

This should all be quite straightforward by now except for the ‘DefaultValue = DefaultValue.Mock’ bit. It tells Moq that whenever a member of the address factory is called without an explicit setup, it should return a mock instance of the return type instead of a plain default value – as long as that return type is mockable. As we’re planning to build an abstract factory, i.e. the return type is another interface, that should be no problem. Add the following to the Arrange section:

IAddressCreator addressCreator = mockAddressFactory.Object.CreateFrom(It.IsAny<string>());

This syntax should be clear by now: the address factory will have a CreateFrom method that accepts a string and returns an IAddressCreator. As usual, let VS create the missing elements for you. We’re getting hold of the address creator to be able to set up our expectations on it. The result is an IAddressCreator, but we need to get hold of the underlying mock object, so add one more line to Arrange:

Mock<IAddressCreator> mock = Mock.Get(addressCreator);

Next we’ll do some action:

//Act
shopService.Create(new Shop());

The address of the shop will be built using the IAddressCreator. This interface will have a method called CreateAddressFor which accepts a Shop object. We want to assert that this CreateAddressFor method was called:

//Assert
mock.Verify(x => x.CreateAddressFor(It.IsAny<Shop>()));

Run the tests and as usual this new test should fail, which is good. Implement ShopService.Create as follows:

public void Create(Shop shop)
		{
			IAddressCreator addressCreator = addressFactory.CreateFrom("USA");
			Address address = addressCreator.CreateAddressFor(shop);
			shop.Address = address;
			shopRepository.Save(shop);
		}

Create the missing objects and methods using Visual Studio. The test should pass meaning that we successfully verified that the CreateAddressFor method was called on a dependency injected by the factory which in turn was also injected in the constructor.

Using the MockRepository to handle a lot of dependencies

At times you may be dealing with a lot of dependencies of an object. Many dependencies can point to deeper problems, such as the object trying to achieve too much, but that’s beside the point here.

It is tedious to write setups and verifications on each dependency. The MockRepository object makes handling a large number of dependencies easier. We’ll return to the When_creating_a_product.cs class. Insert a new test method which is an updated version of Then_repository_save_should_be_called():

[Test]
		public void Then_repository_save_should_be_called_MOCK_REPO()
		{
			//Arrange
			MockRepository mockRepository = new MockRepository(MockBehavior.Loose) { DefaultValue = DefaultValue.Mock };
			Mock<IProductRepository> productRepository = mockRepository.Create<IProductRepository>();
			Mock<IProductIdBuilder> productIdBuilder = mockRepository.Create<IProductIdBuilder>();
			productRepository.Setup(p => p.Save(It.IsAny<Product>())).Verifiable();
			ProductService productService = new ProductService(productRepository.Object, productIdBuilder.Object);

			//Act
			productService.Create(new ProductViewModel());
			//Assert
			mockRepository.Verify();

		}

We initialise a new MockRepository and indicate the mock behaviour and the default value. These settings apply to all mock objects created by the repository, so we don’t need to specify them one by one. Next we create the mocked dependencies for the ProductService using the mock repository directly. At the end we call the Verify method on the mock repository. This is a short-hand solution instead of calling Verify on each and every expectation we set up: the mock repository goes through each mock it created and runs the verifications accordingly.

Protected members

In some cases you may want to test that a protected member of a class was called. This is not a straightforward scenario as protected members are not accessible outside the class. Moq can solve the problem but it comes with some ‘buts’: there’s no IntelliSense as protected members are not visible; instead they must be referred to by name using strings. Also, instead of the ‘It’ class the ‘ItExpr’ class must be used. In addition, the protected member under test must be marked with the virtual keyword, as we saw in previous examples.

You’ll need to include the Moq.Protected namespace with a using statement. An example of setting up our expectations on a protected member is the following:

mockIdBuilder.Protected().Setup<string>("GetResult", ItExpr.IsAny<string>())
				.Returns("hello").Verifiable();

The Protected() extension method is the gateway to testing a protected member. The type parameter of Setup<string> denotes the return type of the protected member. “GetResult” is the name of the protected method under test, followed by the parameters the method accepts, expressed with the ItExpr class: in this case the imaginary GetResult method accepts one parameter of type string. We set up the method to return the string “hello” and mark it as Verifiable. If you omit this step then Moq will not be able to check whether this expectation has been met. It makes no difference if you call Verify in the Assert section at the end; the setup of a protected member must be marked as verifiable.
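For context, here is a sketch of the kind of class this setup could run against. ProductIdBuilder below is a hypothetical concrete class and GetResult is the imaginary protected method from the example. Note that protected members can only live on classes, so the mock must be created from a concrete class, e.g. new Mock<ProductIdBuilder>(), not from an interface:

public class ProductIdBuilder
	{
		// must be virtual, otherwise Moq cannot override it
		protected virtual string GetResult(string input)
		{
			return input;
		}
	}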

This ends our discussion of the Moq framework. I hope you now know enough to start writing your own mock objects in your tests.

Test Driven Development in .NET Part 8: verify class properties with mocks using the Moq framework

So far in this series on Moq we’ve been concentrating on mocking methods, their return values and exceptions they may throw. However, we should not forget about object properties: setters and getters. Writing tests may become quite involved when we need to set up a hierarchy of properties: a property of an object is another object which has a property that is also an object which has a property etc.

The demos build on the project where we left off in the previous post.

Setters

The most basic test we can perform is check if a property has been assigned a value, i.e. the property setter has been called. We’ll add a property to ICustomerRepository called LocalTimeZone and see if that setter has been called when creating a new Customer.

Open the solution we’ve been working with in this series on Moq. Add a new class called When_creating_a_customer to the CustomerService_Tests folder and insert the following test skeleton:

[TestFixture]
    public class When_creating_a_customer
    {
        [Test]
        public void The_local_timezone_should_be_set()
        {
            //Arrange
            Mock<ICustomerRepository> mockCustomerRepository = new Mock<ICustomerRepository>();
            Mock<ICustomerStatusFactory> mockCustomerStatusFactory = new Mock<ICustomerStatusFactory>();
            CustomerViewModel customerViewModel = new CustomerViewModel()
            {
                FirstName = "Elvis"
                , LastName = "Presley"
                , Status = CustomerStatus.Gold
            };
            CustomerService customerService = new CustomerService(mockCustomerRepository.Object, mockCustomerStatusFactory.Object);
                        
            //Act
            customerService.Create(customerViewModel);
        }
    }

This should be familiar from the discussions in the previous post. What we’re missing is the Assert section:

//Assert
mockCustomerRepository.VerifySet(c => c.LocalTimeZone = It.IsAny<string>());

VerifySet means that we want to verify that a setter has been called. We specify which setter we mean by the lambda expression. Let VS create the property stub for LocalTimeZone. The new test should fail as we do not set this property in the Create method. Let’s modify it now:

public void Create(CustomerViewModel customerToCreate)
        {
            var customer = new Customer(
                customerToCreate.FirstName, customerToCreate.LastName);

            customer.StatusLevel =
                _customerStatusFactory.CreateFrom(customerToCreate);
            _customerRepository.LocalTimeZone = TimeZone.CurrentTimeZone.StandardName;
            if (customer.StatusLevel == CustomerStatus.Gold)
            {
                _customerRepository.SaveSpecial(customer);
            }
            else
            {
                _customerRepository.Save(customer);
            }
        }

The test should now pass along with all the other tests.

Getters

We want to be able to update a customer. The Customer object will receive a new property called RegionId which will be set using a getter of the ICustomerRepository interface. We want to make sure that this getter has a value, i.e. is not null.

Add a new class to CustomerService_Tests called When_updating_a_customer with the following starting point for a test:

[TestFixture]
    public class When_updating_a_customer
    {
        [Test]
        public void The_region_id_of_repository_should_be_used()
        {
            //Arrange
            Mock<ICustomerRepository> mockCustomerRepository = new Mock<ICustomerRepository>();
            Mock<ICustomerStatusFactory> mockCustomerStatusFactory = new Mock<ICustomerStatusFactory>();
            CustomerViewModel customerViewModel = new CustomerViewModel()
            {
                FirstName = "Elvis"
                ,LastName = "Presley"
                ,Status = CustomerStatus.Gold
            };
            CustomerService customerService = new CustomerService(mockCustomerRepository.Object, mockCustomerStatusFactory.Object);

            //Act
            customerService.Update(customerViewModel);
        }

    }

Note the addition of the Update method, let VS create it for you.

Add the following setup statement just before the construction of the CustomerService object:

mockCustomerRepository.Setup(c => c.RegionId).Returns(123);

This looks very much like setting up the return value of a method. Let VS take care of the RegionId property. Make sure the generated property is of type nullable integer. We’re missing the Assert statement:

//Assert
            mockCustomerRepository.VerifyGet(x => x.RegionId);

VerifyGet – as you may have guessed – means that we want to make sure that a getter was called. The getter is then defined by the lambda expression. Run the test and as usual the new one should fail. Let’s implement the Update method:

public void Update(CustomerViewModel customerViewModel)
        {
            var customer = new Customer(
                customerViewModel.FirstName, customerViewModel.LastName);
            int? regionId = _customerRepository.RegionId;
            if (!regionId.HasValue)
            {
                throw new InvalidRegionIdException();
            }
            customer.RegionId = regionId.Value;
            _customerRepository.Update(customer);
        }

As usual, let VS take care of the missing elements. The test should now pass.

Mocking property hierarchies

One extra scenario we must consider is when a dependency returns a deeply nested property and we want to access the end of the nested property chain. Example: the Settings property of a Customer is returned as follows:

customer.Settings = _customerRepository.SystemConfig.BaseInformation.Settings;

This is exactly what we want to achieve. Open When_creating_a_customer.cs and insert a new test method called The_customer_settings_should_be_retrieved(). Arrange and Act will be familiar:

//Arrange
			Mock<ICustomerRepository> mockCustomerRepository = new Mock<ICustomerRepository>();
			Mock<ICustomerStatusFactory> mockCustomerStatusFactory = new Mock<ICustomerStatusFactory>();
			CustomerViewModel customerViewModel = new CustomerViewModel()
			{
				FirstName = "Elvis"
				,LastName = "Presley"
				,Status = CustomerStatus.Gold
			};
			CustomerService customerService = new CustomerService(mockCustomerRepository.Object, mockCustomerStatusFactory.Object);

			//Act
			customerService.Create(customerViewModel);

We only need to include the Assert phase now. We want to assert that the Settings property has been retrieved:

mockCustomerRepository.VerifyGet(x => x.SystemConfig.BaseInformation.Settings);

Create the necessary classes and properties. Don’t worry about the implementations. Run the tests and the new test will fail as usual. Go to CustomerService.Create and include the following call just below _customerStatusFactory.CreateFrom:

customer.Settings = _customerRepository.SystemConfig.BaseInformation.Settings;

Run the tests and 3 of them should fail:

Tests fail when testing for nested properties

We forgot to properly set up the expectations on the nested properties, so they all return null, causing a NullReferenceException to be thrown. Include the following Setup in the Arrange section:

                                mockCustomerRepository
				.Setup(x => x.SystemConfig.BaseInformation.Settings)
				.Returns(new Settings());

Run the tests again and you’ll see that we still have the previous 3 failing methods. However, the error message on the test we’re currently working on is different: invalid setup on a non-virtual member. It turns out that this is due to Moq, not our code really. Moq can make it easy to mock up a chain of nested properties like that but it needs some help. The solution is to mark the properties in that chain with the ‘virtual’ keyword to let Moq know it is a mockable property:

public class SystemConfig
	{
		public virtual BaseInformation BaseInformation { get; set; }
	}

public class BaseInformation
	{
		public virtual Settings Settings { get; set; }
	}

public class Settings
	{
		public virtual string Colour { get; set; }
	}

Another solution would be to turn the BaseInformation, Settings and SystemConfig properties into interfaces, but I think in this case that would be wrong. They are plain object properties, not interfaces.

This is the trade-off. In other mocking frameworks you may need to set up the expectations on each nested property one by one. Moq makes our lives easier by mocking up each property in the chain for us, but in turn we need to add the ‘virtual’ modifier. Run the tests and the current one should pass. Add the same setup to the other two failing tests as well.

Stubbing properties

There are occasions where a dependency has many properties that need to be set up. It is tedious to call a Setup on each. Instead we can instruct Moq to create a stub of the dependency as opposed to a mock. Stubbing is performed via the SetupAllProperties method:

mockCustomerRepository.SetupAllProperties();
mockCustomerRepository.Object.SystemConfig = new SystemConfig();

After calling SetupAllProperties we can set the value of each property as shown in the example.
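As an illustration, using the ICustomerRepository members from this post, the stubbed properties behave like plain auto-properties and remember whatever is assigned to them:

mockCustomerRepository.SetupAllProperties();
mockCustomerRepository.Object.RegionId = 123;
mockCustomerRepository.Object.SystemConfig = new SystemConfig();
int? regionId = mockCustomerRepository.Object.RegionId; // 123, no explicit Setup needed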

In the next post we’ll investigate some more advanced features of this mocking framework.

Test Driven Development in .NET Part 7: verify method arguments and exceptions with mocks using the Moq framework

In this post we’ll discuss how to verify arguments and exceptions using Moq.

Arguments

Arguments to our mock objects are important components. Therefore we should verify that the correct input parameters were passed to them. This will ensure that the right input data is used by the dependency injected into the system under test. Different parameters will also enable us to check the behaviour of the system, e.g. by controlling the execution flow.

Open the solution we worked on in the previous post. The domain experts say that the product ID should be somehow built based on the product name. We want to extend the IProductIdBuilder interface with an overloaded BuildProductIdentifier method that accepts the Name of the product. Then we want to verify that the input parameter passed to this method is valid.

Locate When_creating_a_product.cs and add the following test:

[Test]
        public void The_product_id_should_be_created_from_product_name()
        {

        }

The Arrange phase looks as follows:

//Arrange
            ProductViewModel productViewModel = new ProductViewModel() { Description = "Nice product", Name = "ProductA" };
            Mock<IProductRepository> mockProductRepository = new Mock<IProductRepository>();
            Mock<IProductIdBuilder> mockIdBuilder = new Mock<IProductIdBuilder>();
            mockIdBuilder.Setup(i => i.BuildProductIdentifier(It.IsAny<String>()));
            ProductService productService = new ProductService(mockProductRepository.Object
                , mockIdBuilder.Object); 

Note the String parameter passed to BuildProductIdentifier. Let Visual Studio create the overloaded method in the IProductIdBuilder interface. If you recall from the first post on Moq this type of setup will help verify that the BuildProductIdentifier method was called when the Create method in ProductService.cs is called.

Act is simple:

//Act
productService.Create(productViewModel);

Assert is a bit more complicated:

//Assert
mockIdBuilder.Verify(m => m.BuildProductIdentifier(It.Is<String>(n => n.Equals(productViewModel.Name, StringComparison.InvariantCultureIgnoreCase))));

Here we verify that the parameter passed to BuildProductIdentifier is the same as the Name property of the product view model, ignoring case.

Run the tests and this new test should fail. Look at the reason in the Test Explorer window: an InvalidProductIdException was thrown. Of course we forgot to set up the expected return value of the BuildProductIdentifier method. Go ahead and replace the Setup as follows:

mockIdBuilder.Setup(i => i.BuildProductIdentifier(It.IsAny<String>())).Returns(new ProductIdentifier());

Re-run the tests and… …our new test still fails with the same message. Can you guess why? There are now two methods in IProductIdBuilder and the current Setup sets up the overloaded method whereas the current implementation of the Create method in ProductService uses the parameterless version which will return a null object. So we must pass in the product name to BuildProductIdentifier in the Create method:

product.Identifier = _productIdBuilder.BuildProductIdentifier(product.Name);

Re-run the tests and you’ll see that the new test passes but we have two failing ones: The_product_should_be_saved_if_id_was_created() and Then_repository_save_should_be_called(). The reason is the same as above: we’re setting up the wrong BuildProductIdentifier method now that we have the new policy of creating the ID from the product name. I hope you’re starting to see the real power of TDD.

Adjust the setup method in Then_repository_save_should_be_called() as follows:

mockIdBuilder.Setup(x => x.BuildProductIdentifier(It.IsAny<string>())).Returns(new ProductIdentifier());

And in The_product_should_be_saved_if_id_was_created():

mockIdBuilder.Setup(i => i.BuildProductIdentifier(productViewModel.Name)).Returns(new ProductIdentifier());

The tests should now pass.

You can even test the new test method by passing in the Description property of the Product object to BuildProductIdentifier in the Create method of ProductService. The test will fail because the expected Name property does not equal the actual value passed in.

Controlling the flow of execution

For the flow control demo we’ll work with a new object in our domain: Customer. The domain experts say that Customers must have a first and last name and should have a Status. The developers then translate the requirements into a simple code-branching mechanism when a new Customer needs to be saved: either save it ‘normally’ or in some special way depending on the customer status.

As usual we’ll take a test-first approach. Add a new folder to the Tests project called CustomerService_Tests. Add a new class called When_creating_a_gold_status_customer to that folder with the first test method looking like this:

[TestFixture]
    public class When_creating_a_gold_status_customer
    {
        [Test]
        public void A_special_save_routine_should_be_used()
        {

        }
    }

We know that we’ll need some type of Repository to save the customer and also a factory that will yield the appropriate Status type based on some parameters. Insert the following Arrange section:

//Arrange
            Mock<ICustomerRepository> mockCustomerRepository = new Mock<ICustomerRepository>();
            Mock<ICustomerStatusFactory> mockCustomerStatusFactory = new Mock<ICustomerStatusFactory>();
            CustomerViewModel customerViewModel = new CustomerViewModel()
            {
                FirstName = "Elvis"
                , LastName = "Presley"
                , Status = CustomerStatus.Gold
            };

            CustomerService customerService = new CustomerService(mockCustomerRepository.Object, mockCustomerStatusFactory.Object);

Create the missing objects, constructors etc. using Visual Studio. Make sure that CustomerStatus is created as an enum type. In Act we’ll call the Create method:

//Act
customerService.Create(customerViewModel);

In the assert phase we want to verify that the SaveSpecial method is called:

//Assert
mockCustomerRepository.Verify(
                x => x.SaveSpecial(It.IsAny<Customer>()));

The test will of course fail at first so we’ll fill up our newly created objects:

Customer.cs:

public class Customer
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public CustomerStatus StatusLevel { get; set; }

        public Customer(string firstName, string lastName)
        {
            FirstName = firstName;
            LastName = lastName;
        }
    }

CustomerStatus:

public enum CustomerStatus
    {
        Bronze,
        Gold,
        Platinum
    }

CustomerViewModel:

public class CustomerViewModel
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public CustomerStatus Status { get; set; }
    }

ICustomerRepository:

public interface ICustomerRepository
    {
        void SaveSpecial(Customer customer);
        void Save(Customer customer);
    }

ICustomerStatusFactory:

public interface ICustomerStatusFactory
    {
        CustomerStatus CreateFrom(CustomerViewModel customerToCreate);
    }

CustomerService:

public class CustomerService
    {
        private readonly ICustomerRepository _customerRepository;
        private readonly ICustomerStatusFactory _customerStatusFactory;

        public CustomerService(ICustomerRepository customerRepository, ICustomerStatusFactory customerStatusFactory)
        {
            _customerRepository = customerRepository;
            _customerStatusFactory = customerStatusFactory;
        }

        public void Create(CustomerViewModel customerToCreate)
        {
            var customer = new Customer(
                customerToCreate.FirstName, customerToCreate.LastName);

            customer.StatusLevel =
                _customerStatusFactory.CreateFrom(customerToCreate);

            if (customer.StatusLevel == CustomerStatus.Gold)
            {
                _customerRepository.SaveSpecial(customer);
            }
            else
            {
                _customerRepository.Save(customer);
            }
        }
    }

Re-run the tests and this new test still fails. We forgot to tell Moq that ICustomerStatusFactory should return the Gold status. Add the following to the Arrange section of the test method:

mockCustomerStatusFactory.Setup(
                c => c.CreateFrom(It.Is<CustomerViewModel>(u => u.Status == CustomerStatus.Gold)))
                .Returns(CustomerStatus.Gold);

This means that if the Status of the CustomerViewModel is Gold then the CreateFrom of the mock object should return Gold. Run the test and you’ll see that indeed the SaveSpecial route is taken and the test passes.

Mocking exceptions

We can set up the dependencies so that they throw some specific exception when invoked. This way you can test how the system under test behaves when it encounters an exception from a dependency. We’ll simulate the following: we want the IProductIdBuilder dependency to throw an exception and check whether the exception is handled correctly.

Open When_creating_a_product.cs and add a new test method:

[Test]
        public void An_exception_should_be_raised_when_id_is_invalid()
        {

        }

Add the following skeleton to the method body:

//Arrange
            ProductViewModel productViewModel = new ProductViewModel() { Description = "Nice product", Name = "ProductA" };
            Mock<IProductRepository> mockProductRepository = new Mock<IProductRepository>();
            Mock<IProductIdBuilder> mockIdBuilder = new Mock<IProductIdBuilder>();
            mockIdBuilder.Setup(i => i.BuildProductIdentifier(It.IsAny<String>())).Returns(new ProductIdentifier());
            ProductService productService = new ProductService(mockProductRepository.Object
                , mockIdBuilder.Object); 

            //Act
            productService.Create(productViewModel);
            //Assert

This code should be familiar by now.

We want an exception to be thrown if the ID is incorrect. Add the following attribute to the method:

[ExpectedException(typeof(ProductCreatedException))]
        public void An_exception_should_be_raised_when_id_is_invalid()

Let VS create the ProductCreatedException class for you. Go ahead and make the new class inherit from Exception. Next we want to set up IProductIdBuilder so that it throws a ProductIdNotCreatedException when it’s called. Replace the current Setup call with the following:

mockIdBuilder.Setup(m => m.BuildProductIdentifier(It.IsAny<string>())).Throws<ProductIdNotCreatedException>();

Again, let VS create ProductIdNotCreatedException for you and have it extend the Exception object.

This should be clear: we don’t want BuildProductIdentifier to return an ID but to throw an exception instead. Run the tests and the new test will obviously fail: the Create method does not yet handle the exception thrown by BuildProductIdentifier. Modify the Create method as follows:

public void Create(ProductViewModel productViewModel)
        {
            try
            {
                Product product = ConvertToDomain(productViewModel);
                product.Identifier = _productIdBuilder.BuildProductIdentifier(product.Name);
                if (product.Identifier == null)
                {
                    throw new InvalidProductIdException();
                }
                _productRepository.Save(product);
            }
            catch (ProductIdNotCreatedException e)
            {
                throw new ProductCreatedException(e);
            }
        }

Let VS insert a new constructor for ProductCreatedException and run the tests. They should all pass.

In the next post we’ll take a look at how to verify class properties with Moq.
