SOLID design principles in .NET: the Dependency Inversion Principle Part 4, Interception and conclusions

Introduction

I briefly mentioned the concept of Interception in this post. It is a technique that can help you implement cross-cutting concerns such as logging, tracing, caching and other similar activities. Cross-cutting concerns include actions that are not strictly related to a specific domain but can potentially be called from many different objects. E.g. you may want to cache certain method results pretty much anywhere in your application, so potentially you’ll need an ICacheService dependency in many places. In the post mentioned above I went through a possible DI pattern – ambient context – to implement such actions with all its pitfalls.

If you’re completely new to these concepts make sure you read through all the previous posts on DI in this series. I won’t repeat what was already explained before.

The idea behind Interception is quite simple. When a consumer calls a service you may wish to intercept that call and execute some action before and/or after the actual service is invoked.
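The idea can be sketched with nothing more than delegates; the helper below is my own illustration, not part of any framework:

```csharp
using System;
using System.Collections.Generic;

public static class Interception
{
	// Wraps a function so that extra actions run before and after the real call.
	public static Func<T> Intercept<T>(Func<T> inner, Action before, Action after)
	{
		return () =>
		{
			before();           // e.g. look in a cache, start a timer
			T result = inner(); // the actual service call
			after();            // e.g. store the result, log the elapsed time
			return result;
		};
	}
}
```

A consumer would call the wrapped delegate exactly as it would call the original one; it cannot tell the difference, which is the whole point of interception.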

It happens occasionally that I do the shopping on my way home from work. This is a real-life example of interception: the true purpose of my action is to get home but I “intercept” that action with another one, namely buying some food. I can also do the shopping when I pick up my daughter from the kindergarten or when I want to go for a walk. So I intercept the main actions PickUpFromKindergarten() and GoForAWalk() with the shopping action because it is convenient to do so. The shopping action can be injected into several other actions so in this case it may be considered a cross-cutting concern. Of course the shopping activity can also be performed on its own, just like you can call a CacheService directly to cache something, in which case it is the main action.

The main source for this series on Dependency Injection is the work of Mark Seemann and his great book on DI.

The problem

Say you have a service that looks up an object with an ID:

public interface IProductService
{
	Product GetProduct(int productId);
}
public class DefaultProductService : IProductService
{
	public Product GetProduct(int productId)
	{
		return new Product();
	}
}

Say you don’t want to look up this product every time so you decide to cache the result for 10 minutes.

Possible solutions

Total lack of DI

The first “solution” is to directly implement caching within the GetProduct method. Here I’m using the ObjectCache class located in the System.Runtime.Caching namespace:

public Product GetProduct(int productId)
{
	ObjectCache cache = MemoryCache.Default;
	string key = "product|" + productId;
	Product p = null;
	if (cache.Contains(key))
	{
		p = (Product)cache[key];
	}
	else
	{
		p = new Product();
		CacheItemPolicy policy = new CacheItemPolicy();
		DateTimeOffset dof = DateTimeOffset.Now.AddMinutes(10);
		policy.AbsoluteExpiration = dof;
		cache.Add(key, p, policy);
	}
	return p;
}

We check the cache using the cache key and retrieve the Product object if it’s available. Otherwise we simulate a database lookup and put the Product object in the cache with an absolute expiration of 10 minutes.

If you’ve read through the posts on DI and SOLID then you should know that this type of code has numerous pitfalls:

  • It is tightly coupled to the ObjectCache class
  • You cannot easily specify a different caching strategy – if you want to increase the caching time to 20 minutes then you’ll have to come back here and modify the method
  • The method signature does not tell anything to the caller about caching, so it violates the idea of an Intention Revealing Interface mentioned before
  • Therefore the caller will need to intimately know the internals of the GetProduct method
  • The method is difficult to test as it’s impossible to abstract away the caching logic. The test result will depend on the caching mechanism within the code so it will be inconclusive

Nevertheless you have probably encountered this style of coding quite often. There is nothing stopping you from writing code like that. It’s quick, it’s dirty, but it certainly works.

As an attempt to remedy the situation you can factor out the caching logic to a service:

public class SystemRuntimeCacheStorage
{
	public void Remove(string key)
	{
		ObjectCache cache = MemoryCache.Default;
		cache.Remove(key);
	}

	public void Store(string key, object data)
	{
		ObjectCache cache = MemoryCache.Default;
		// a default policy: no explicit expiration
		cache.Add(key, data, new CacheItemPolicy());
	}

	public void Store(string key, object data, DateTime absoluteExpiration, TimeSpan slidingExpiration)
	{
		ObjectCache cache = MemoryCache.Default;
		// note: MemoryCache rejects a policy where both the absolute and the
		// sliding expiration are set to real values; pass
		// ObjectCache.InfiniteAbsoluteExpiration or ObjectCache.NoSlidingExpiration
		// for the one you don't need
		var policy = new CacheItemPolicy
		{
			AbsoluteExpiration = absoluteExpiration,
			SlidingExpiration = slidingExpiration
		};

		if (cache.Contains(key))
		{
			cache.Remove(key);
		}
		cache.Add(key, data, policy);
	}

	public T Retrieve<T>(string key)
	{
		ObjectCache cache = MemoryCache.Default;
		return cache.Contains(key) ? (T)cache[key] : default(T);
	}
}

This class stores, removes and retrieves cached objects; retrieval is generic through the Retrieve&lt;T&gt; method. As the next step you want to call this service from the DefaultProductService class as follows:

public class DefaultProductService : IProductService
{
	private SystemRuntimeCacheStorage _cacheStorage;

	public DefaultProductService()
	{
		_cacheStorage = new SystemRuntimeCacheStorage();
	}

	public Product GetProduct(int productId)
	{
		string key = "product|" + productId;
		Product p = _cacheStorage.Retrieve<Product>(key);
		if (p == null)
		{
			p = new Product();
			_cacheStorage.Store(key, p);
		}
		return p;
	}
}

We’ve seen a similar example in the previous post where the consuming class constructs its own dependency. This “solution” has the same errors as the one above – it’s only the stacktrace that has changed. You’ll get the same faulty design with a factory as well. However, this was a step towards a loosely coupled solution.

Dependency injection

As you know by now abstractions are the way to go to reach loose coupling. We can factor out the caching logic into an interface:

public interface ICacheStorage
{
	void Remove(string key);
	void Store(string key, object data);
	void Store(string key, object data, DateTime absoluteExpiration, TimeSpan slidingExpiration);
	T Retrieve<T>(string key);
}

Then using constructor injection we can inject the caching mechanism as follows:

public class DefaultProductService : IProductService
{
	private readonly ICacheStorage _cacheStorage;

	public DefaultProductService(ICacheStorage cacheStorage)
	{
		_cacheStorage = cacheStorage;
	}

	public Product GetProduct(int productId)
	{
		string key = "product|" + productId;
		Product p = _cacheStorage.Retrieve<Product>(key);
		if (p == null)
		{
			p = new Product();
			_cacheStorage.Store(key, p);
		}
		return p;
	}
}

Now we can inject any type of concrete caching solution which implements the ICacheStorage interface. As far as tests are concerned you can inject an empty caching solution using the Null object pattern so that the test can concentrate on the true logic of the GetProduct method.
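Such a Null object cache might look like the following sketch (the NullCacheStorage name is mine; the interface is repeated so the snippet stands alone):

```csharp
using System;

// ICacheStorage as declared above, repeated here for completeness.
public interface ICacheStorage
{
	void Remove(string key);
	void Store(string key, object data);
	void Store(string key, object data, DateTime absoluteExpiration, TimeSpan slidingExpiration);
	T Retrieve<T>(string key);
}

// A do-nothing cache: Retrieve always reports a miss, so a unit test
// of GetProduct exercises the real lookup logic on every call.
public class NullCacheStorage : ICacheStorage
{
	public void Remove(string key) { }
	public void Store(string key, object data) { }
	public void Store(string key, object data, DateTime absoluteExpiration, TimeSpan slidingExpiration) { }
	public T Retrieve<T>(string key) { return default(T); }
}
```

Injecting this into DefaultProductService effectively switches caching off without the service ever knowing.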

This is certainly a loosely coupled solution but you may need to inject similar interfaces to a potentially large number of services:

public ProductService(ICacheStorage cacheStorage, ILoggingService loggingService, IPerformanceService performanceService)
public CustomerService(ICacheStorage cacheStorage, ILoggingService loggingService, IPerformanceService performanceService)
public OrderService(ICacheStorage cacheStorage, ILoggingService loggingService, IPerformanceService performanceService)

These services will permeate your class structure. Alternatively, you may create a base class for all services like this:

public ServiceBase(ICacheStorage cacheStorage, ILoggingService loggingService, IPerformanceService performanceService)

If all services must inherit this base class then they will start their lives with 3 abstract dependencies that they may not even need. Also, these dependencies don’t represent the true purpose of the services, they are only “sidekicks”.

Ambient context

For a discussion on this type of DI and when and why (not) to use it consult this post.

Interception using the Decorator pattern

The Decorator design pattern can be used as a do-it-yourself interception. The product service class can be reduced to its true purpose:

public class DefaultProductService : IProductService
	{		
		public Product GetProduct(int productId)
		{
			return new Product();
		}
	}

A cached product service might look as follows:

public class CachedProductService : IProductService
{
	private readonly IProductService _innerProductService;
	private readonly ICacheStorage _cacheStorage;

	public CachedProductService(IProductService innerProductService, ICacheStorage cacheStorage)
	{
		if (innerProductService == null) throw new ArgumentNullException("innerProductService");
		if (cacheStorage == null) throw new ArgumentNullException("cacheStorage");
		_cacheStorage = cacheStorage;
		_innerProductService = innerProductService;
	}

	public Product GetProduct(int productId)
	{
		string key = "Product|" + productId;
		Product p = _cacheStorage.Retrieve<Product>(key);
		if (p == null)
		{
			p = _innerProductService.GetProduct(productId);
			_cacheStorage.Store(key, p);
		}

		return p;
	}
}

The cached product service itself implements IProductService and accepts another IProductService in its constructor. The injected product service will be used to retrieve the product in case the injected cache service cannot find it.

The consumer can actively use the cached implementation of the IProductService in place of the DefaultProductService class to deliberately call for caching. Here the call to retrieve a product is intercepted by caching. The cached service can concentrate on its task using the injected ICacheStorage object and delegates the actual product retrieval to the injected IProductService class.

You can imagine that it’s possible to write a logging decorator, a performance decorator etc., i.e. a decorator for any type of cross-cutting concern. You can even decorate the decorator to include logging AND caching. Here you see several applications of SOLID. You keep the product service clean so that it adheres to the Single Responsibility Principle. You extend its functionality through the cached product service decorator which is an application of the Open-Closed principle. And obviously injecting the dependencies through abstractions is an example of the Dependency Inversion Principle.
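As an illustration, a logging decorator in the same style might look like the sketch below. The Console-based logging is my own stand-in; a real implementation would take an ILoggingService dependency. The supporting types are repeated from earlier so the snippet stands alone.

```csharp
using System;

// Types from earlier in the post, repeated so the example stands alone.
public class Product { }

public interface IProductService
{
	Product GetProduct(int productId);
}

public class DefaultProductService : IProductService
{
	public Product GetProduct(int productId) { return new Product(); }
}

// A logging decorator in the same style as CachedProductService.
public class LoggedProductService : IProductService
{
	private readonly IProductService _innerProductService;

	public LoggedProductService(IProductService innerProductService)
	{
		if (innerProductService == null) throw new ArgumentNullException("innerProductService");
		_innerProductService = innerProductService;
	}

	public Product GetProduct(int productId)
	{
		Console.WriteLine("Requesting product {0}", productId);
		Product product = _innerProductService.GetProduct(productId);
		Console.WriteLine("Retrieved product {0}", productId);
		return product;
	}
}
```

In the composition root the decorators nest naturally: new LoggedProductService(new CachedProductService(new DefaultProductService(), new SystemRuntimeCacheStorage())) gives you logging AND caching without touching DefaultProductService.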

The Decorator is a well-tested pattern to implement interception in a highly flexible object-oriented way. You can implement a lot of decorators for different purposes and you will adhere to SOLID pretty well. However, imagine that in a large business application with hundreds of domains and hundreds of services you may potentially have to write hundreds of decorators. As each decorator executes one thing only to adhere to SRP you may need to implement 3-4 decorators for each service.

That’s a lot of code to write… This is actually a practical limitation of solely using this pattern in a large application to achieve interception: it’s extremely repetitive and time consuming.

Aspect oriented programming (AOP)

The idea behind AOP is strongly related to attributes in .NET. An example of an attribute in .NET is the following:

[PrincipalPermission(SecurityAction.Demand, Role = "Administrator")]
protected void Page_Load(object sender, EventArgs e)
{
}

This is also an example of interception. The PrincipalPermission attribute checks the role of the current principal before the decorated method can continue. In this case the ASP.NET page won’t load unless the principal has the Administrator role. I.e. the call to Page_Load is intercepted by this Security attribute.

The decorator pattern we saw above is an example of imperative coding. The attributes are an example of declarative interception. Applying attributes to declare aspects is a common technique in AOP. Imagine that instead of writing all those decorators by hand you could simply decorate your objects as follows:

[Cached]
[Logged]
[PerformanceChecked]
public class DefaultProductService : IProductService
{		
	public Product GetProduct(int productId)
	{
		return new Product();
	}
}

It looks attractive, right? Well, let’s see.

The PrincipalPermission attribute is special as it’s built into the .NET base class library (BCL) along with some other attributes. .NET understands this attribute and knows how to act upon it. However, there are no built-in attributes for caching and other cross-cutting concerns. So you’d need to implement your own aspects. That’s not too difficult; your custom attribute will need to derive from the System.Attribute base class. You can then decorate your classes with your custom attribute but .NET won’t understand how to act upon it. The code behind your implemented attribute won’t run just like that.
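For example, declaring and applying a caching attribute is easy, but the attribute is pure metadata; nothing will ever be cached because of it (CachedAttribute and its DurationMinutes property are my own illustration):

```csharp
using System;

// Declaring a custom attribute is easy...
[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method)]
public class CachedAttribute : Attribute
{
	public int DurationMinutes { get; set; }
}

// ...and applying it compiles fine, but nothing caches anything:
// the attribute is inert until some tool reads it and acts on it.
[Cached(DurationMinutes = 10)]
public class AnnotatedProductService
{
	public string GetProduct(int productId)
	{
		return "product " + productId;
	}
}
```

The metadata is only reachable via reflection, e.g. typeof(AnnotatedProductService).GetCustomAttributes(typeof(CachedAttribute), false); it is up to PostSharp or your own code to do something with it.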

There are commercial products, like PostSharp, that enable you to write attributes that are acted upon. PostSharp carries out its job by modifying your code in the post-compilation step. The “normal” compilation runs first, e.g. by csc.exe and then PostSharp adds its post-compilation step by taking the code behind your custom attribute(s) and injecting it into the code compiled by csc.exe in the correct places.

This sounds enticing. At least it sounded like heaven to me when we tested AOP with PostSharp: we wanted to measure the execution time and save several values about the caller of some very important methods of a service. So we implemented our custom attributes and were extremely proud of ourselves. Well, until someone else on the team started using PostSharp in his own assembly. When I referenced his project in mine I suddenly kept getting these funny notices that I had to activate my PostSharp account… So what’s wrong with those aspects?

  • The code you write will be different from what will be executed as new code will be injected into the compiled one in the post-compilation step. This may be tricky in a debugging session
  • The vendors will be happy to provide helper tools for debugging which may or may not be included in the base price and push you towards an anti-pattern where you depend on certain external vendors – also a form of tight coupling
  • Attribute constructor arguments must be compile-time constants – it’s not easy to consume dependencies from within an attribute. Your best bet is using ambient context – or abandoning DI and going with default implementations of the dependencies
  • It can be difficult to fine-grain the rules when to apply an aspect. You may want to go with a convention-based applicability such as “apply the aspect on all objects whose name ends with ‘_log'”
  • The aspect itself is not an abstraction; it’s not straightforward to inject different implementations of an aspect – therefore if you decide to go with the System.Runtime.Cache in your attribute implementation then you cannot change your mind afterwards. You cannot implement a factory or any other mechanism to inject a certain aspect in place of some abstract aspect as there’s no such thing

This last point is probably the most serious one. It pulls you towards the dreaded tight-coupling scenario where you cannot easily redistribute a class or a module due to the concrete dependency introduced by an aspect. If you consume such an external library, like in the example I gave you, then you’re stuck with one implementation – and you better make sure you have access to the correct credentials to use that unwanted dependency…

Dynamic interception with a DI container

We briefly mentioned DI containers, or IoC containers, in this series. You may be familiar with some of them, such as StructureMap and Castle Windsor. I won’t get into any details regarding those tools. There are numerous tutorials available on the net to get you started. As you get more and more exposed to SOLID in your projects you’ll most likely become familiar with at least one of them.

Dynamic interception makes use of the ability of .NET to dynamically emit types. Some DI containers enable you to automate the generation of decorators to be emitted straight into a running process.

This approach is fully object-oriented and helps you avoid the shortcomings of AOP attributes listed above. You can register your own decorators with the IoC container, you don’t need to rely on a default one.

If you are new to DI containers then make sure you understand the basics before you go down the dynamic interception route. I won’t show you any code here on how to implement this technique as it depends on the IoC container of your choosing. The key steps as far as Castle Windsor is concerned are as follows:

  • Implement the IInterceptor interface for your decorator
  • Register the interceptor with the container
  • Activate the interceptor by implementing the IModelInterceptorsSelector interface – this is the step where you declare when and where the interceptors will be invoked
  • Register the class that implements the IModelInterceptorsSelector interface with the container

Carefully following these steps will ensure that you can implement dynamic interception without the need for attributes. Note that not all IoC containers come with the feature of dynamic interception.
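To give a feel for what such a container does under the hood, here is a hand-rolled sketch. The IInterceptor and IInvocation interfaces below are simplified stand-ins that only mimic the shape of Castle’s real types, and FuncInvocation plays the role of the proxy the container would emit for you:

```csharp
using System;
using System.Collections.Generic;

// Simplified stand-ins for a container's interception contracts
// (Castle's real IInterceptor/IInvocation are richer).
public interface IInvocation
{
	string MethodName { get; }
	object ReturnValue { get; set; }
	void Proceed();
}

public interface IInterceptor
{
	void Intercept(IInvocation invocation);
}

// One caching interceptor replaces a whole family of caching decorators.
// For brevity the cache is keyed by method name only.
public class CachingInterceptor : IInterceptor
{
	private readonly Dictionary<string, object> _cache = new Dictionary<string, object>();

	public void Intercept(IInvocation invocation)
	{
		object cached;
		if (_cache.TryGetValue(invocation.MethodName, out cached))
		{
			invocation.ReturnValue = cached; // short-circuit: skip the real call
			return;
		}
		invocation.Proceed();                // run the intercepted method
		_cache[invocation.MethodName] = invocation.ReturnValue;
	}
}

// The container emits a proxy that routes calls through the interceptor;
// this hand-written invocation plays that role for a single method.
public class FuncInvocation : IInvocation
{
	private readonly Func<object> _target;

	public FuncInvocation(string methodName, Func<object> target)
	{
		MethodName = methodName;
		_target = target;
	}

	public string MethodName { get; private set; }
	public object ReturnValue { get; set; }
	public void Proceed() { ReturnValue = _target(); }
}
```

The point of the container is that you never write FuncInvocation yourself: the proxy type is generated and wired to your interceptors at runtime.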

Conclusions

In this mini-series on DI within the series about SOLID I hope to have explained the basics of the Dependency Inversion Principle. This last constituent of SOLID is probably the one that has caused the most controversy and misunderstanding of the five. Ask 10 developers about the purpose of DIP and you’ll get 11 different answers. You may well have come across ideas in these posts that you disagree with – feel free to comment in that case.

However, I think these posts successfully dismissed several myths and misunderstandings about DI and DIP:

  • DI is the same as IoC containers: no, IoC containers can automate DI but you can by all means apply DI in your code without such a tool
  • DI can be solved with factories: look at the post on DI anti-patterns and you’ll laugh at this idea
  • DI requires an IoC container: see the first point, this is absolutely false
  • DI is only necessary if you want to enable unit testing: no, DI has several advantages as we saw, effective unit testing being only one of them
  • Interception is best done with AOP: no, see above
  • Using an IoC container will automatically result in DI: no, you have to prepare your code according to the DI patterns, otherwise an IoC container will have nothing to inject

View the list of posts on Architecture and Patterns here.

SOLID design principles in .NET: the Dependency Inversion Principle Part 3, DI anti-patterns

In the previous post we discussed the various techniques for implementing Dependency Injection. Now it’s time to show how NOT to do DI, so we’ll look at a couple of anti-patterns in this post.

The main source for this series on Dependency Injection is the work of Mark Seemann and his great book on DI.

Lack of DI

The most obvious DI anti-pattern is the total absence of it, where the class controls its own dependencies. It is also the most common one; code like this appears everywhere:

public class ProductService
{
	private ProductRepository _productRepository;

	public ProductService()
	{
		_productRepository = new ProductRepository();
	}
}

Here the consumer class, i.e. ProductService, creates an instance of the ProductRepository class with the ‘new’ keyword. Thereby it directly controls the lifetime of the dependency. There’s no attempt to introduce an abstraction and the client has no way of introducing another type – implementation – for the dependency.

.NET languages, and certainly other similar platforms as well, make this extremely easy for the programmer. There is no automatic SOLID checking in Visual Studio, and why would there be such a mechanism? We want to give as much freedom to the programmer as possible, so they can pick – or even mix – C#, VB, F#, C++ etc., write in an object-oriented way or follow some other style, so there’s a high degree of control given to them within the framework. So it feels natural to new up objects as much as we like: if I need something I’ll need to go and get it. If the ProductService needs a product repository then it will need to fetch one. So even experienced programmers who know SOLID and DI inside out can fall into this trap from time to time, simply because it’s easy and programming against abstractions means more work and a higher degree of complexity.

The first step towards salvation may be to declare the private field as an abstract type and mark it as readonly:

public class ProductService
{
	private readonly IProductRepository _productRepository;

	public ProductService()
	{
		_productRepository = new ProductRepository();
	}
}

However, not much is gained here yet, as at runtime _productRepository will always be a new ProductRepository.

A common, but very wrong way of trying to resolve the dependency is by using some Factory. Factories are an extremely popular pattern and they seem to be the solution to just about anything – besides copy/paste of course. I’m half expecting Bruce Willis to save the world in Die Hard 27 by applying the static factory pattern. So no wonder people are trying to solve DI with it too. You can see from the post on factories that they come in 3 forms: abstract, static and concrete.

The “solution” with the concrete factory may look like this:

public class ProductRepositoryFactory
{
	public ProductRepository Create()
	{
		return new ProductRepository();
	}
}
public ProductService()
{
	ProductRepositoryFactory factory = new ProductRepositoryFactory();
	_productRepository = factory.Create();
}

Great, we’re now depending directly on ProductRepositoryFactory and ProductRepository is still hard-coded within the factory. So instead of just one hard dependency we now have two, well done!

What about a static factory?

public class ProductRepositoryFactory
{
	public static ProductRepository Create()
	{
		return new ProductRepository();
	}
}
public ProductService()
{
	_productRepository = ProductRepositoryFactory.Create();
}

We’ve got rid of the ‘new’, yaaay! Well, no, if you recall from the introduction using static methods still creates a hard dependency, so ProductService still depends directly on ProductRepositoryFactory and indirectly on ProductRepository.

To make matters worse the static factory can be misused to produce a sense of freedom to the client to control the type of dependency as follows:

public class ProductRepositoryFactory
{
	public static IProductRepository Create(string repositoryTypeDescription)
	{
		switch (repositoryTypeDescription)
		{
			case "default":
				return new ProductRepository();
			case "test":
				return new TestProductRepository();
			default:
				throw new NotImplementedException();
		}
	}
}
public ProductService()
{
	_productRepository = ProductRepositoryFactory.Create("default");
}

Oh, brother, this is a real mess. ProductService still depends on the ProductRepositoryFactory class, so we haven’t eliminated that. It is now also indirectly dependent on the two concrete repository types returned by the factory. Also, we now have magic strings flying around. If we ever introduce a third type of repository then we’ll need to revisit the factory and inform all actors that there’s a new magic string. This model is very difficult to extend. The ability to configure in code, or possibly in a config file, gives a false sense of security to the developer.

You can create a mock product repository for a unit test scenario, extend the switch-case statement in the factory and maybe introduce a new magic string “mock” only for testing purposes. Then you pass “mock” to the Create method, recompile and run your tests. Then you forget to put it back to “default” or whatever and deploy the solution… Realising this you may want to overload the ProductService constructor like this:

public ProductService(string productRepoDescription)
{
	_productRepository = ProductRepositoryFactory.Create(productRepoDescription);
}

That only moves the magic string problem up the stacktrace but does nothing to solve the dependency problems outlined above.

Let’s look at an abstract factory:

public interface IProductRepositoryFactory
{
	IProductRepository Create(string repoDescription);
}

ProductRepositoryFactory can implement this interface:

public class ProductRepositoryFactory : IProductRepositoryFactory
{
	public IProductRepository Create(string repositoryTypeDescription)
	{
		switch (repositoryTypeDescription)
		{
			case "default":
				return new ProductRepository();
			case "test":
				return new TestProductRepository();
			default:
				throw new NotImplementedException();
		}
	}
}

You can use it from ProductService as follows:

private IProductRepositoryFactory _productRepositoryFactory;
private IProductRepository _productRepository;

public ProductService(string productRepoDescription)
{
	_productRepositoryFactory = new ProductRepositoryFactory();
	_productRepository = _productRepositoryFactory.Create(productRepoDescription);
}

We need to new up a ProductRepositoryFactory, so we’re back at square one. However, abstract factory is still the least harmful of the factory patterns when trying to solve DI as we can refactor this code in the following way:

private readonly IProductRepository _productRepository;

public ProductService(string productRepoDescription, IProductRepositoryFactory productRepositoryFactory)
{
	_productRepository = productRepositoryFactory.Create(productRepoDescription);
}

This is not THAT bad. We can provide any type of factory now but we still need to provide a magic string. A way of getting rid of that string would be to create specialised methods within the factory as follows:

public interface IProductRepositoryFactory
{
	IProductRepository Create(string repoDescription);
	IProductRepository CreateTestRepository();
	IProductRepository CreateSqlRepository();
	IProductRepository CreateMongoDbRepository();
}

…with ProductService using it like this:

private readonly IProductRepository _productRepository;

public ProductService(IProductRepositoryFactory productRepositoryFactory)
{
	_productRepository = productRepositoryFactory.CreateMongoDbRepository();
}

This is starting to look like proper DI. We can inject our own version of the factory and then pick the method that returns the necessary repository type. However, this is a false positive impression again. The ProductService still controls the type of repository returned by the factory. If we wanted to test the SQL repo then we have to revisit the product service – and all other services that need a repository – and select CreateSqlRepository() instead. Same goes for unit testing. You can certainly create a mock repository factory but you’ll need to make sure that the mock implementation returns mock objects for all repository types the factory returns. That breaks ISP in SOLID.

No, even in the above case the caller cannot control the type of IProductRepository used within ProductService. You can certainly control the concrete implementation of IProductRepositoryFactory, but that’s not enough.
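To make the ISP problem concrete, consider what a test double for such a factory looks like: it is forced to implement every creation method even if the test exercises only one of them (a sketch; the mock types are my own):

```csharp
// Types from the factory example, repeated so this stands alone.
public interface IProductRepository { }
public class MockProductRepository : IProductRepository { }

public interface IProductRepositoryFactory
{
	IProductRepository Create(string repoDescription);
	IProductRepository CreateTestRepository();
	IProductRepository CreateSqlRepository();
	IProductRepository CreateMongoDbRepository();
}

// Even if the test only needs one repository, the double must
// implement every creation method - a classic ISP smell.
public class MockProductRepositoryFactory : IProductRepositoryFactory
{
	public IProductRepository Create(string repoDescription) { return new MockProductRepository(); }
	public IProductRepository CreateTestRepository() { return new MockProductRepository(); }
	public IProductRepository CreateSqlRepository() { return new MockProductRepository(); }
	public IProductRepository CreateMongoDbRepository() { return new MockProductRepository(); }
}
```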

Conclusion: factories are great for purposes other than DI. Use one of the strategies outlined in the previous post.

Lack of DI creates tightly coupled classes where one cannot exist without the other. You cannot redistribute the ProductService class without the concrete ProductRepository so it diminishes re-usability.

Overloaded constructors

Consider the following code:

private readonly IProductRepository _productRepository;

public ProductService() : this(new ProductRepository())
{}

public ProductService(IProductRepository productRepository)
{
	_productRepository = productRepository;
}

It’s great that we can inject an IProductRepository but what’s the default constructor doing there? It simply calls the overloaded one with a concrete implementation of the repository interface. There you are, we’ve just introduced a completely unnecessary coupling. Matters get worse if the default implementation comes from an external source such as a factory seen above:

private readonly IProductRepository _productRepository;

public ProductService() : this(new ProductRepositoryFactory().Create("sql"))
{}

public ProductService(IProductRepository productRepository)
{
	_productRepository = productRepository;
}

By now you know why factories are not suitable for solving DI so I won’t repeat myself. Even if clients always call the overloaded constructor the class still cannot exist without either the ProductRepository or the ProductRepositoryFactory class.

The solution is easy: get rid of the default constructor and force clients to provide their own implementations.
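A repaired version keeps a single constructor with a guard clause, in the style shown earlier in the series:

```csharp
using System;

// Repeated from earlier examples so the snippet stands alone.
public interface IProductRepository { }

public class ProductService
{
	private readonly IProductRepository _productRepository;

	// The one and only constructor: the caller must supply an implementation.
	public ProductService(IProductRepository productRepository)
	{
		if (productRepository == null) throw new ArgumentNullException("productRepository");
		_productRepository = productRepository;
	}
}
```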

Service locator

I’ve already written a post on this available here, so I won’t repeat the whole post. In short: a Service Locator resembles a proper IoC container such as StructureMap but it introduces easily avoidable couplings between objects.

Conclusion

We’ve discussed some of the ways NOT to do DI. There may certainly be more of them but these are probably the most frequent ones. The most important variant to get rid of is the lack of DI, which is exacerbated by the use of factories. It’s also the easiest to spot – look for the ‘new’ keyword in conjunction with dependencies.

The use of static methods and properties can also be an indicator of DIP violation:

DateTime.Now.ToString();
DataAccess.SaveCustomer(customer);
ProductRepositoryFactory.Create("sql");

We’ve seen, especially in the case of factories, that static methods only push the dependencies one step down the execution ladder. Static methods are acceptable if they don’t themselves have concrete dependencies but only use the parameters passed in by the calling object. However, if those static methods new up other dependencies which in turn may instantiate their own dependencies then that will quickly become a tightly coupled nightmare.
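The distinction might look like this (Order, OrderMath, OrderSaver and SqlOrderRepository are hypothetical types for illustration):

```csharp
// Hypothetical types to illustrate the point.
public class Order { public bool Saved; }

public class SqlOrderRepository
{
	public void Save(Order order) { order.Saved = true; } // stands in for real data access
}

public static class OrderMath
{
	// Acceptable static method: pure, touches only its parameters.
	public static decimal Total(decimal unitPrice, int quantity)
	{
		return unitPrice * quantity;
	}
}

public static class OrderSaver
{
	// Problematic static method: silently news up a concrete dependency.
	public static void Save(Order order)
	{
		var repository = new SqlOrderRepository(); // hidden, hard-coded coupling
		repository.Save(order);
	}
}
```

OrderMath.Total can be called from anywhere without coupling the caller to anything; OrderSaver.Save ties every caller to SqlOrderRepository.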


SOLID design principles in .NET: the Dependency Inversion Principle Part 2, DI patterns

In the previous post we went through the basics of DIP and DI. We’ll continue the discussion with the following question: how do we implement DI?

The main source for this series on Dependency Injection is the work of Mark Seemann and his great book on DI.

Flavours of DI

I hinted at the different forms of DI in the previous post. By far the most common form is called Constructor Injection:

public ProductService(IProductRepository productRepository)
{
	_productRepository = productRepository;
}

Constructor injection is the perfect way to ensure that the necessary dependency is always available to the class; we force clients to supply some implementation of the interface. All you need is a public constructor that accepts the dependency as a parameter. In the constructor we save the incoming concrete instance for later use, i.e. we assign it to a private field.

We can introduce a second level of security by introducing a guard clause:

public ProductService(IProductRepository productRepository)
{
       if (productRepository == null) throw new ArgumentNullException("ProductRepo");
	_productRepository = productRepository;
}

It is good practice to mark the private field, in this case _productRepository, as readonly: it guarantees that once the initialisation logic of the constructor has executed the field cannot be modified by any other code:

private readonly IProductRepository _productRepository;

This is not required for DI to work but it protects you against mistakes such as setting the value to null someplace else in your code.

Constructor injection is a good way to document your class to the clients. The clients will see that ProductService will need an IProductRepository to perform its job. There’s no attempt to hide this fact.

When the constructor has finished, the object is guaranteed to have a valid dependency, i.e. it is in a consistent state. The following method will not choke on a null _productRepository:

public IEnumerable<Product> GetProducts()
{
	IEnumerable<Product> productsFromDataStore = _productRepository.FindAll();	
	return productsFromDataStore;
}

We don’t need to test for null within the GetProducts method as we know it is guaranteed to be in a consistent state.

Constructor injection should be your default DI technique whenever your class has a dependency and no reasonable local default exists. A local default is an acceptable local implementation of an abstract dependency used when none is injected by the client. We’ll talk more about this a bit later. Also, try to avoid overloaded constructors, because then you’ll need to rely on those local defaults if the empty constructor is used. In addition, having just one constructor greatly simplifies the usage of automated DI containers, also called IoC containers. If you don’t know what they are, then don’t worry about them for now. They are not a prerequisite for DIP and DI and they cannot magically turn tightly coupled code into loosely coupled code. Make sure that you first understand how to do DI without such tools.

You can read briefly about them here. In short they are ‘magic’ tools that can initialise dependencies for you. In the above example you can configure such a container, e.g. StructureMap, to inject a MyDefaultProductRepository whenever it sees that an IProductRepository is needed.
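To give a flavour of what such a configuration looks like, here is a minimal sketch assuming the classic StructureMap API; MyDefaultProductRepository is a hypothetical implementation of IProductRepository, not a class from this post:

```csharp
using StructureMap;

// Register the mapping: whenever an IProductRepository is requested,
// hand out a MyDefaultProductRepository (hypothetical class).
Container container = new Container(x =>
{
	x.For<IProductRepository>().Use<MyDefaultProductRepository>();
});

// The container news up ProductService and injects the registered
// repository into its constructor automatically.
ProductService productService = container.GetInstance<ProductService>();
```

The point is that the ‘new’ keyword moves into a single configuration spot, the composition root, instead of being scattered across the code base.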

The next DI type in line is called Property Injection, also known as Setter Injection. It is used when your class has a good local default for a dependency but you still want to enable clients to override that default:

public IProductRepository ProductRepository { get; set; }

In this case you obviously cannot mark the backing field readonly.

This implementation is fragile: reference-type fields default to null, so ProductRepository will also be null until someone sets it. You’ll need to extend the property setter as follows:

public IProductRepository ProductRepository 
{
	get
	{
		return _productRepository;
	}
	set
	{
		if (value == null) throw new ArgumentNullException("value");
		_productRepository = value;
	}
}

We’re still not done. Nothing forces the client to call this setter, so the GetProducts method will throw a NullReferenceException if the dependency was never assigned. At some point we must initialise the dependency, perhaps in the constructor:

public ProductService()
{
	_productRepository = new DefaultProductRepository();
}

By now we know that initialising an object with the ‘new’ keyword increases coupling between two objects, so whenever possible accept the dependency in the form of an abstraction instead.

Alternatively you can take the Lazy Initialisation approach in the property getter:

public IProductRepository ProductRepository 
{
	get
	{
		if (_productRepository == null)
		{
			_productRepository = new ProductRepository();
		}
		return _productRepository;
	}
	set
	{
		if (value == null) throw new ArgumentNullException("value");
		_productRepository = value;
	}
}

We can go even further. If you want clients to set the dependency only once, with no possibility to alter it during the object’s lifetime, then the following approach can work:

public IProductRepository ProductRepository 
{
	get
	{
		if (_productRepository == null)
		{
			_productRepository = new ProductRepository();
		}
		return _productRepository;
	}
	set
	{
		if (value == null) throw new ArgumentNullException("value");
		if (_productRepository != null) throw new InvalidOperationException("You are not allowed to set this dependency more than once.");
		_productRepository = value;
	}
}

By the way, this doesn’t mean that from now on you must never again use the new keyword to initialise objects. At some point you’ll HAVE TO use it as you can’t connect the bits and pieces using abstractions only. E.g. this is invalid code:

ProductService ps = new ProductService(new IProductRepository());

Unless you have an IoC container in place you’ll need to go with what’s called poor man’s dependency injection:

ProductService ps = new ProductService(new DefaultProductRepository());

Where DefaultProductRepository implements IProductRepository.

Coming back to Property injection, you can use it whenever there’s a well functioning default implementation of a dependency but you still want to let your users provide their own implementation. In other words the dependency is optional. However, this choice must be a conscious one. As there’s nothing forcing the clients to call the property setter, your code shouldn’t complain if the dependency is null when it’s needed. If it would, that’s a sign that you need to turn to constructor injection instead. In case you don’t really need a local default then the Null Object pattern can come in handy. If the local default is part of the .NET base class library then it may be an acceptable approach to use it.
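To illustrate the Null Object pattern mentioned above, here is a minimal sketch; ILogger, NullLogger and OrderService are invented names for the example, not part of any real library:

```csharp
using System;

public interface ILogger
{
	void Log(string message);
}

// The null object fulfils the contract by doing nothing, so the consumer
// never needs to test the dependency for null.
public class NullLogger : ILogger
{
	public void Log(string message) { }
}

public class OrderService
{
	private readonly ILogger _logger;

	// The local default is the null object rather than null itself.
	public OrderService() : this(new NullLogger()) { }

	public OrderService(ILogger logger)
	{
		if (logger == null) throw new ArgumentNullException("logger");
		_logger = logger;
	}

	public void PlaceOrder()
	{
		_logger.Log("Order placed."); // safe even when no real logger was injected
	}
}
```

The consuming code stays free of null checks and the “optional” nature of the dependency is made explicit by the parameterless constructor.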

As you see, the last version of the property getter-setter is considerably more complex than the constructor injection approach. It looks easy at first, just an auto-implemented get;set; property, but you soon notice that it’s a fragile structure.

A third way of doing DI is called method injection. It is used when we want to ensure that we can inject a different implementation every time the dependency is used:

public IEnumerable<Product> GetProducts(IProductDiscountStrategy productDiscount)
{
	IEnumerable<Product> productsFromDataStore = _productRepository.FindAll();
	foreach (Product p in productsFromDataStore)
	{
		p.AdjustPrice(productDiscount);
	}
	return productsFromDataStore;
}

Here we apply a discount strategy on each product in the iteration. The actual strategy may change a lot, it can depend on the season, the loyalty scheme, the weather, the time of the day etc. As usual you can include a guard clause:

public IEnumerable<Product> GetProducts(IProductDiscountStrategy productDiscount)
{
	if (productDiscount == null) throw new ArgumentNullException("productDiscount");
	IEnumerable<Product> productsFromDataStore = _productRepository.FindAll();
	foreach (Product p in productsFromDataStore)
	{
		p.AdjustPrice(productDiscount);
	}
	return productsFromDataStore;
}

In this approach it’s easy to vary the concrete discount strategy every time we call the GetProducts method. If you had to inject it into the constructor then you’d need to create a new ProductService instance every time you wanted to apply a different pricing strategy. In case the method doesn’t use the injected dependency you won’t need a guard clause. This may sound strange at first; why have a dependency in the signature if it is not used in the method body? Occasionally you’re forced to implement an interface where the interface method defines the dependency; more on that in the post about the Interface Segregation Principle.

Patterns related to method injection are Factory and Strategy. Choosing the proper implementation to be injected into the GetProducts method will almost certainly depend on other inputs such as the choices the user makes on the UI. These patterns will help you solve that problem.
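To sketch how such a factory might look, here is a minimal illustration; the strategy classes and the season strings are invented for the example:

```csharp
// IProductDiscountStrategy as defined earlier in the post:
public interface IProductDiscountStrategy { }

public class ChristmasDiscount : IProductDiscountStrategy { }
public class NoDiscount : IProductDiscountStrategy { }

public static class DiscountStrategyFactory
{
	// Maps a user choice, e.g. from the UI, to a concrete strategy.
	public static IProductDiscountStrategy Create(string season)
	{
		switch (season)
		{
			case "Christmas":
				return new ChristmasDiscount();
			default:
				return new NoDiscount();
		}
	}
}
```

The caller then resolves the strategy and passes it on through method injection: productService.GetProducts(DiscountStrategyFactory.Create("Christmas")).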

An example of method injection from .NET is the Contains extension method from LINQ:

public static bool Contains<TSource>(this IEnumerable<TSource> source, TSource value, IEqualityComparer<TSource> comparer);

You can then provide your own version of the equality comparer.
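For example, a custom comparer can make the lookup case-insensitive; CaseInsensitiveComparer below is an invented helper class:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A comparer that treats strings as equal regardless of case.
public class CaseInsensitiveComparer : IEqualityComparer<string>
{
	public bool Equals(string x, string y)
	{
		return string.Equals(x, y, StringComparison.OrdinalIgnoreCase);
	}

	public int GetHashCode(string obj)
	{
		return obj.ToUpperInvariant().GetHashCode();
	}
}
```

With this in place, new[] { "Alice", "Bob" }.Contains("ALICE", new CaseInsensitiveComparer()) returns true, whereas the default comparison would return false.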

The last type of DI we’ll discuss is making dependencies available through a static accessor. It is also called injection through the ambient context. It is used when implementing cross-cutting concerns.

What are cross-cutting concerns? Operations that may be performed in many unrelated classes and that are not closely related to those classes. A classic example is logging. You may want to log exceptions, performance data etc. I may want to log the time it takes the GetProducts method to return in the ProductService class. You could inject an ILogger interface to every class that needs logging but it introduces a large amount of pollution across your objects. Also, an ILogger interface attached to a constructor is not truly necessary for the dependent object to perform its real job. Logging is usually not part of the core job of services and repositories.

If you’re a web developer then you must have come across the HttpContext object at some point. It is an application of the ambient context pattern: through its static Current property you can always try to get hold of the current HTTP context, which may or may not be available right there and then. You can construct your own context object where you retrieve the current context using thread-local storage. A time provider context may look as follows:

public abstract class TimeProviderContext
{
	public static TimeProviderContext Current
	{
		get
		{
			TimeProviderContext timeProviderContext = Thread.GetData(Thread.GetNamedDataSlot("TimeProvider")) as TimeProviderContext;
			if (timeProviderContext == null)
			{
				timeProviderContext = TimeProviderContext.DefaultTimeProviderContext;
				Thread.SetData(Thread.GetNamedDataSlot("TimeProvider"), timeProviderContext);
			}
			return timeProviderContext;
		}
		set
		{
			Thread.SetData(Thread.GetNamedDataSlot("TimeProvider"), value);
		}
	}
	public static readonly TimeProviderContext DefaultTimeProviderContext = new DotNetTimeProvider();

	public abstract DateTime GetTime { get; }
}

…where DotNetTimeProvider looks as follows:

public class DotNetTimeProvider : TimeProviderContext
{
	public override DateTime GetTime
	{
		get { return DateTime.Now; }
	}
}

The TimeProviderContext is abstract and has a static Current property to get hold of the current context. That’s the classic setup of the ambient context. Storing the context in a named thread data slot like that ensures that each thread has its own context. There’s a default implementation that simply wraps the standard DateTime.Now call from .NET. It’s important that the Current property is writable so that clients can assign their own time providers by deriving from TimeProviderContext. This is helpful in unit testing, where ideally you want to control the time instead of waiting for some specific date. The local default makes sure that the client doesn’t get a null reference when calling the Current property.

For simplicity’s sake I only put a single abstract member in this class to return the date, but ambient context classes can provide as many properties as you need.

You can then use this context in client classes as follows:

public DateTime TestTime()
{
	return TimeProviderContext.Current.GetTime;
}
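For example, a unit test could swap in its own time provider; StubTimeProvider below is a hypothetical test double deriving from the TimeProviderContext shown above:

```csharp
using System;

// A test double that always reports the same, fixed time.
public class StubTimeProvider : TimeProviderContext
{
	private readonly DateTime _fixedTime;

	public StubTimeProvider(DateTime fixedTime)
	{
		_fixedTime = fixedTime;
	}

	public override DateTime GetTime
	{
		get { return _fixedTime; }
	}
}
```

The test freezes the clock, e.g. TimeProviderContext.Current = new StubTimeProvider(new DateTime(2014, 1, 3, 17, 30, 0)), and from then on TestTime() returns that exact value deterministically.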

Ambient context should be used with care. Use it only if you want a cross-cutting concern to be available anywhere throughout your application. Without it you might be tempted to force objects to take on dependencies they don’t need now but might at some point in the future:

public IProductRepository ProductRepository 
{
	get
	{
		if (_productRepository == null)
		{
			_productRepository = new ProductRepository(TimeProviderContext.Current);
		}
		return _productRepository;
	}
}

…where ProductRepository looks as follows:

public class ProductRepository : IProductRepository
{	

	public ProductRepository(TimeProviderContext timeProviderContext)
	{
		//do nothing with the time provider context right now but it may be needed later
	}

	public IEnumerable<Product> FindAll()
	{
		return new List<Product>();
	}
}

That only increases the coupling between objects and pollutes your object structure.

A disadvantage with the ambient context approach is that the consuming class carries an implicit dependency. In other words it hides from the clients that it needs a time provider in order to perform its job. If you put the TestTime() method shown above in the ProductService class then there’s no way for the client to tell just by looking at the interface that ProductService uses this dependency. Also, callers of the TestTime method will get different results depending on the actual context and it may not be transparent to them why this happens without looking at the source code.

There’s actually a technical term to describe this “openness” that Eric Evans came up with: Intention-revealing interfaces. An API should communicate what it does by its public interface alone. A class using the ambient context does exactly the opposite. Clients may know of the existence of a TimeProviderContext class but will probably not know that it is used in ProductService.

There are cross-cutting concerns that only perform some task without returning an answer: logging exceptions and performance data are such cases. If you have that type of scenario then a technique called Interception is something to consider. We’ll look at interception briefly in the blog post after the next one.

Conclusion

Of the four techniques discussed your default choice should always be Constructor Injection in case there’s a dependency within your class. If the dependency varies from operation to operation then Method Injection is a good candidate.

Then ask whether the dependency represents a cross-cutting concern. If it doesn’t and a good local default exists then you can go down the Property Injection path. If it does and you need a return value from the dependency, and a good local default exists for when Context.Current would otherwise be null, then Ambient Context can help. If the dependency only exposes void methods then take a look at Interception.

When in doubt, especially if you are trying to pick a strategy between Constructor Injection and another variant then pick Constructor injection. Things can never go fatally wrong with that option.


SOLID design principles in .NET: the Dependency Inversion Principle and the Dependency Injection pattern

Introduction

The Dependency Inversion Principle (DIP) helps to decouple your code by ensuring that you depend on abstractions rather than concrete implementations. Dependency Injection (DI) is an implementation of this principle; in fact DI and DIP are often used to mean the same thing. A key feature of DIP is programming to abstractions so that consuming classes can depend on those abstractions rather than low-level implementations. DI is the act of supplying a class with the dependencies it needs from the outside rather than letting it obtain its concrete dependencies itself. The term Dependency Injection may at first sound like some very advanced technology but in reality there are no complex techniques behind it. As long as you understand abstractions, and constructors and methods that can accept parameters, you’ll understand DI as well.

Another related term is Inversion of Control, IoC. IoC is an older term than DI. Originally it meant a programming style where a framework or runtime controls the programme flow. By this definition software developed with .NET uses IoC: .NET is the controlling framework. You hook up to its events, lifetime management etc. You are in control of your methods and references but .NET provides the ultimate glue. Nowadays we’re so used to working with frameworks that we don’t care; we’re actually happy that we don’t need to worry about tedious infrastructure stuff. With time IoC drifted to mean Inversion of Control Containers, which are mechanisms that control dependencies. Martin Fowler coined the term Dependency Injection to describe this particular flavour of IoC. Thus DI is still the more precise term for control over dependencies, although people often use IoC instead.

DIP is a large topic so it will span several posts to discuss it thoroughly. However, one important conclusion up front is the following:

The frequency of the ‘new’ keyword in your code is a rough estimate of the degree of coupling in your object structure.

A side note: in the demo I’ll concentrate on interfaces but DI works equally well with abstract classes.

The main source in the series on Dependency Injection is based on the work of Mark Seemann and his great book on DI.

What are dependencies?

Dependencies can come in different forms.

A framework dependency is your choice of the development framework, such as .NET. You as a .NET developer are probably comfortable with that dependency as it’s unlikely to change during the product’s lifetime. Most often if you need the product to run on a different framework then it will be rewritten for that platform in a different language without discarding the original product. As this dependency is very unlikely to change and it’s a very high level dependency we don’t worry about it in this context.

Third party libraries introduce a lower level dependency that may well change over time. If a class depends on an external dll then that class may be difficult to test as we need that library to be in a consistent state when testing. However, some of those dependencies may never change. E.g. if I want to communicate with the Amazon cloud in code then it’s best to download and reference the Amazon .NET SDK and use that for the entire lifetime of the product. It’s unlikely that I will write my own SDK to communicate with the Amazon web services.

Databases store your data and they can come in many different forms: SQL Server, MySQL, MongoDb, RavenDb, Oracle etc. You may think that an application will never change its storage mechanism, but you’d better be prepared and abstract it away behind a repository, as we’ll see in the demo.

Some other less obvious dependencies include the File System, Emails, Web services and other networking technologies.

System resources such as the clock to get the current time. You may think it’s unnecessary to abstract that away. After all, why would you ever write your own time provider if the system clock is available? Think of unit testing a method that depends on time: a price discount is given between 5pm and 6pm every Friday. How would you unit test that logic? Do you really wait until 5pm on a Friday and hope that you can make the test pass before 6pm? That’s not too clever, as your unit test could only run during that time. So there’s a valid point in abstracting away system resources as well.
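The Friday-afternoon discount becomes trivially testable once the clock is hidden behind an abstraction; the sketch below uses invented names (ITimeProvider, DiscountCalculator, FixedTimeProvider):

```csharp
using System;

public interface ITimeProvider
{
	DateTime GetCurrentTime();
}

public class DiscountCalculator
{
	private readonly ITimeProvider _timeProvider;

	public DiscountCalculator(ITimeProvider timeProvider)
	{
		_timeProvider = timeProvider;
	}

	// The discount is active between 5pm and 6pm on Fridays.
	public bool IsDiscountActive()
	{
		DateTime now = _timeProvider.GetCurrentTime();
		return now.DayOfWeek == DayOfWeek.Friday && now.Hour == 17;
	}
}

// A test double that always returns a fixed time, so the unit test can
// run at any time of the week.
public class FixedTimeProvider : ITimeProvider
{
	private readonly DateTime _time;

	public FixedTimeProvider(DateTime time)
	{
		_time = time;
	}

	public DateTime GetCurrentTime()
	{
		return _time;
	}
}
```

In production you’d inject an implementation that returns DateTime.Now; in a test you inject a FixedTimeProvider set to any Friday at 5.30pm.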

Configuration in general, such as the app settings you read from the config file.

The new keyword, as hinted at in the introduction, generally points to tighter coupling and the introduction of an extra dependency. As soon as you write var o = new MyObject() in MyClass then MyClass is closely dependent on MyObject. If MyObject changes its behaviour then you have to be prepared for changes and unexpected behaviour in MyClass as well. Using static methods is no way out, as your code will depend on the type that defines the static method, such as MyFactory.GetObject(). You haven’t newed up a MyFactory object but your code is still dependent on it.

Depending on threading-related objects such as Thread or Task can make your methods difficult to test as well. If there’s a call to Thread.Sleep then even your unit test will need to wait.

Any time you introduce a new library reference you take on an extra dependency. If you have 4 layers in your solution, a UI, Services, Domain and Repository, then you can glue them together by referencing them in Visual Studio: the UI uses Services, Services use the Repository etc. The degree of coupling is closely related to how painful it is to replace those layers with new ones. Do you have to sit for days in front of your computer trying to reconnect all the broken links, or does it go relatively fast because all you need to do is inject a different implementation of an abstraction?

Getting random numbers using the Random object also introduces an extra dependency that’s hard to test. If your code depends on random numbers then you may have to run the unit test many times before you get to test all branches. Instead, you can provide a different implementation of generating random numbers where you control the outcome of this mechanism so that you can easily test your code.
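A minimal sketch of such an abstraction, with invented names; the test double returns a fixed value so a unit test can exercise a specific branch deterministically:

```csharp
using System;

public interface IRandomGenerator
{
	int Next(int minValue, int maxValue);
}

// The production implementation simply delegates to System.Random.
public class DefaultRandomGenerator : IRandomGenerator
{
	private readonly Random _random = new Random();

	public int Next(int minValue, int maxValue)
	{
		return _random.Next(minValue, maxValue);
	}
}

// The test double ignores the range and always returns the same number,
// so the test controls which branch of the consuming code runs.
public class FixedRandomGenerator : IRandomGenerator
{
	private readonly int _value;

	public FixedRandomGenerator(int value)
	{
		_value = value;
	}

	public int Next(int minValue, int maxValue)
	{
		return _value;
	}
}
```

Code that consumes IRandomGenerator through its constructor can then be tested with FixedRandomGenerator while production code receives DefaultRandomGenerator.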

I’ve provided a hint on how to abstract away built-in .NET objects in your code at the end of this post.

Demo

In the demo we’ll use a simple Product domain whose price can be adjusted using a discount strategy. The Products will be retrieved using a ProductRepository. A ProductService class will depend on ProductRepository to communicate with the underlying data store. We’ll first build the classes without DIP in mind and then we’ll refactor the code. We’ll keep all classes to a minimum without any real implementation so that we can concentrate on the issues at hand. Open Visual Studio and create a new console application. Add the following Product domain class:

public class Product
{
	public void AdjustPrice(ProductDiscount productDiscount)
	{
	}
}

…where ProductDiscount looks even simpler:

public class ProductDiscount
{
}

ProductRepository will help us communicate with the data store:

public class ProductRepository
{
	public IEnumerable<Product> FindAll()
	{
		return new List<Product>();
	}
}

We connect the above objects in the ProductService class:

public class ProductService
{
	private ProductDiscount _productDiscount;
	private ProductRepository _productRepository;

	public ProductService()
	{
		_productDiscount = new ProductDiscount();
		_productRepository = new ProductRepository();
	}

	public IEnumerable<Product> GetProducts()
	{
		IEnumerable<Product> productsFromDataStore = _productRepository.FindAll();
		foreach (Product p in productsFromDataStore)
		{
			p.AdjustPrice(_productDiscount);
		}
		return productsFromDataStore;
	}
}

A lot of code is still written like that nowadays. This is the traditional approach in programming: high level modules call low level modules and instantiate their dependencies as they need them. Here ProductService calls ProductRepository, but before it can do that it needs to new one up using the ‘new’ keyword. The client, i.e. the ProductService class must fetch the dependencies it needs in order to carry out its tasks. Two dependencies are created in the constructor with the ‘new’ keyword. This breaks the Single Responsibility Principle as the class is forced to carry out work that’s not really its concern.

The ProductService is thus tightly coupled to those two concrete classes. It is difficult to use different discount strategies: there may be different discounts at Christmas, Halloween, Easter, New Year’s Day etc. Also, there may be different strategies to retrieve the data from the data store, such as an SQL Server database, a MySQL database, a MongoDb database, file storage, in-memory storage etc. Whenever those strategies change you must update the ProductService class, which breaks just about every principle we’ve seen so far in this series on SOLID.

It is also difficult to test the product service in isolation. The test must make sure that the ProductDiscount and ProductRepository objects are in a valid state and perform as they are expected so that the test result does not depend on them. If the ProductService sends an email then even the unit test call must send an email in order for the test to pass. If the emailing service is not available when the test runs then your test will fail regardless of the true business logic of the method under test.

All in all it would be easier if we could provide any kind of strategy to the ProductService class without having to change its implementation. This is where abstractions and DI enter the scene.

As we can have different strategies for price discounts and data store engines we’ll need to introduce abstractions for them:

public interface IProductDiscountStrategy
{
}
public interface IProductRepository
{
	IEnumerable<Product> FindAll();
}

Have the discount and repo classes implement these interfaces:

public class ProductDiscount : IProductDiscountStrategy
{
}
public class ProductRepository : IProductRepository
{
	public IEnumerable<Product> FindAll()
	{
		return new List<Product>();
	}
}

When we program against abstractions like that we introduce a seam into the application. The term comes from seams on clothes, where separate pieces can be sewn together. Think of the studs on LEGO building blocks: you can mix and match the blocks pretty much as you like thanks to their standard connectors. In fact LEGO applied DIP and DI pretty well in their business idea.

Let’s update the ProductService class step by step. The first step is to change the type of the private fields:

private IProductDiscountStrategy _productDiscount;
private IProductRepository _productRepository;

You’ll see that the AdjustPrice method of Product now breaks so we’ll update it too:

public void AdjustPrice(IProductDiscountStrategy productDiscount)
{
}

Now the Product class can accept any type of product discount so we’re on the right track. See that the AdjustPrice accepts a parameter of an abstract type? We’ll look at the different flavours of DI later but that’s actually an application of the pattern called method injection. We inject the dependency through a method parameter. We’ll employ constructor injection to remove the hard dependency on the ProductRepository class within ProductService:

public ProductService(IProductRepository productRepository)
{
	_productDiscount = new ProductDiscount();
	_productRepository = productRepository;
}

Here’s the final version of the ProductService class:

public class ProductService
{
	private IProductRepository _productRepository;

	public ProductService(IProductRepository productRepository)
	{
		_productRepository = productRepository;
	}

	public IEnumerable<Product> GetProducts(IProductDiscountStrategy productDiscount)
	{
		IEnumerable<Product> productsFromDataStore = _productRepository.FindAll();
		foreach (Product p in productsFromDataStore)
		{
			p.AdjustPrice(productDiscount);
		}
		return productsFromDataStore;
	}
}

Now clients of ProductService, possibly a ProductController in an MVC app, will need to provide these dependencies so that the ProductService class can concentrate on its job rather than having to new up dependencies. Obtaining the correct pricing and data store strategy should not be the responsibility of the ProductService. This relates well to the Open-Closed Principle in that you can create new pricing strategies without having to update the ProductService class.

It’s important to note that the ProductService class is now honest and transparent. It is honest about its needs and doesn’t try to hide the external objects it requires in order to fulfil its job, i.e. its dependencies are explicit. Clients using the ProductService class will know that it requires a discount strategy and a product repository. There are no hidden side effects and unexpected results.

The opposite case is a class with implicit dependencies, such as the first version of the ProductService class. It’s not obvious to the caller that ProductService will use external resources, and the caller has no control whatsoever over them. In the best case, if you have access to the code, you can inspect it and maybe even refactor it. The first version of ProductService tells the client that it suffices to simply create a new ProductService object and then it will be able to fetch the products. However, what do we do if the database is not present? Or if the prices are not correct due to the wrong discount strategy? Then comes the part where ProductService says: ‘oh, sorry, I need database access before I can do anything, didn’t you know that…?’

Imagine that you get a new job as a programmer. Normally all the ‘dependencies’ that you need for your job are provided for you: a desk, a computer, the programming software etc. Now think how this would work without DI: you’d have to buy a desk, get a computer within some specified price range, install the software yourself etc. That is the opposite of DI and that is how ProductService worked before the refactoring.

A related pattern for achieving DI is the Adapter pattern. It provides a simple mechanism to abstract away dependencies that you have no control over, such as the built-in .NET classes. E.g. if your class sends emails then your default choice could be to use the built-in .NET emailing objects, such as MailMessage and SmtpClient. You cannot easily abstract away that dependency as you have no access to the source code; you cannot make it implement a custom interface such as IEmailingService. The Adapter pattern will help you solve that problem. Also, unit testing code that sends emails is cumbersome as even the test call will need to send out an email. Ideally we don’t test those external services; instead we inject a mock object in place of the real one. How is that done? Start here.
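As a rough sketch of such an adapter, built on the System.Net.Mail classes; the IEmailingService interface and the sender address are illustrative, not prescribed:

```csharp
using System.Net.Mail;

public interface IEmailingService
{
	void Send(string to, string subject, string body);
}

// The adapter hides the concrete SmtpClient behind the abstraction so that
// consuming classes depend on IEmailingService and tests can inject a mock.
public class SmtpEmailingService : IEmailingService
{
	public void Send(string to, string subject, string body)
	{
		// "noreply@mycompany.com" is a placeholder sender address.
		using (SmtpClient client = new SmtpClient())
		using (MailMessage message = new MailMessage("noreply@mycompany.com", to, subject, body))
		{
			client.Send(message);
		}
	}
}
```

A unit test of a class that consumes IEmailingService never touches SmtpClient at all; it receives a mock implementation that merely records the calls.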


SOLID design principles in .NET: the Interface Segregation Principle

Introduction

The Interface Segregation Principle (ISP) states that clients should not be forced to depend on interfaces they do not use. ISP is about breaking down big fat master-interfaces into more specialised and cohesive ones that group related functionality. Imagine that your class needs some functionality from an interface but not all of it. If you have no direct access to that interface then all you can do is implement the relevant bits and ignore the methods that are not relevant to your class.

The opposite is that an interface grows to allow for more functionality, with the risk that some methods in the interface are unrelated, e.g. CalculateArea() and DoTheShopping(). All implementing classes will of course need to implement all methods of the interface, where the unnecessary ones may have nothing but a throw new NotImplementedException() as their method body.

For the caller this won’t be obvious:

Person.DoTheShopping();
Neighbour.DoTheShopping();

Triangle.CalculateArea();
Square.CalculateArea();

These look perfectly reasonable, right? However, if the interface they implement looks like this…:

public interface IMasterInterface
{
    void DoTheShopping();
    double CalculateArea();
}

…then the client may as well make the following method calls:

Person.CalculateArea();
Triangle.DoTheShopping();

…where the true implementations will probably throw a NotImplementedException, have an empty body, or at best return some default value such as 0. Also, if you loop through an enumeration of IMasterInterface objects then the caller expects to be able to simply call CalculateArea() on every object in the list, adhering to the LSP principle of the previous post. They will instead see exceptions thrown or inexplicable 0’s returned where the method does not apply.

The above example is probably too extreme, nobody would do anything like that in real code I hope. However, there are more subtle differences between two ‘almost related’ objects than between a Person and a Triangle as we’ll see in the demo.

Often such master interfaces are a result of a growing domain where more and more properties and behaviour are assigned to the domain objects without giving much thought to the true hierarchy between classes. Adhering to ISP will often result in small interfaces with only 1-3 methods that concentrate on some very specific tasks. This helps you create subclasses that only implement those interfaces that are meaningful to them.

Big fat interfaces – and abstract base classes for that matter – therefore introduce unnecessary dependencies. Unnecessary dependencies in turn reduce the cohesion, flexibility and maintainability of your classes and increase the coupling between dependencies.

Demo

We’ll simulate a movie rental where movies can be rented in two formats: DVD and BluRay. The domain expert correctly recognises that these are indeed related products that both can implement the IProduct interface. Open Visual Studio, create a new Console app and add the following interface:

public interface IProduct
{
    decimal Price { get; set; }
    double Weight { get; set; }
    int Stock { get; set; }
    int AgeLimit { get; set; }
    int RunningTime { get; set; }
}

Here I take the interface approach but it’s perfectly reasonable to follow an approach based on an abstract base class instead, such as ProductBase.

The following two classes implement the interface:

public class DVD : IProduct
{
    public decimal Price { get; set; }
    public double Weight { get; set; }
    public int Stock { get; set; }
    public int AgeLimit { get; set; }
    public int RunningTime { get; set; }
}
public class BluRay : IProduct
{
    public decimal Price { get; set; }
    public double Weight { get; set; }
    public int Stock { get; set; }
    public int AgeLimit { get; set; }
    public int RunningTime { get; set; }
}

All is well so far, right? Now management comes along and says that they want to start selling T-Shirts with movie stars printed on them so the new product is more or less related to the main business. The programmer first thinks it’s probably just OK to implement the IProduct interface again:

public class TShirt : IProduct
{
    public decimal Price { get; set; }
    public double Weight { get; set; }
    public int Stock { get; set; }
    public int AgeLimit { get; set; }
    public int RunningTime { get; set; }
}

Price, stock and weight are definitely relevant to the TShirt domain. Age limit? Not so sure… Running time? Definitely not. However, it’s still there and can be set even if it doesn’t make any sense. Also, the programmer may want to extend the TShirt class with the Size property so he adds this property to the IProduct interface. The DVD and BluRay classes will then also need to implement this property, but does it make sense to store the size of a DVD? Well, maybe it does, but it’s not the same thing as the size of a TShirt. The TShirt size may be a string, such as “XL” or an enumeration, e.g. Size.XL, but the size of a DVD is more complex. It is more likely to be a dimension with 3 values: width, length and depth. So the type of the Size property will be different. Setting the type to Object is a quick fix but it’s a horrendous crime against OOP I think.
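One way out of the Size dilemma – sketched here with made-up interface and enum names – is to keep the two incompatible notions of size in separate, role-specific interfaces instead of forcing a single property type onto IProduct:

```csharp
using System;

// Hypothetical sketch: clothing size and physical dimensions live in
// separate interfaces, so neither class needs an Object-typed property.
public enum ClothingSize { S, M, L, XL }

public interface IWearable
{
    ClothingSize Size { get; set; }
}

public interface IBoxed
{
    double Width { get; set; }    // packaging dimensions in centimetres
    double Length { get; set; }
    double Depth { get; set; }
}

public class TShirt : IWearable
{
    public ClothingSize Size { get; set; }
}

public class DVD : IBoxed
{
    public double Width { get; set; }
    public double Length { get; set; }
    public double Depth { get; set; }
}
```

Each class exposes only the notion of size that makes sense for it; the IProduct members are omitted here for brevity.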

The solution is to break up the IProduct interface into two smaller ones:

public interface IProduct
{
	decimal Price { get; set; }
	double Weight { get; set; }
	int Stock { get; set; }
}

The IProduct interface still fits the DVD and BluRay objects and it fits the new TShirt class as well. The RunningTime property is only relevant to movie-related objects:

public interface IMovie
{
	int RunningTime { get; set; }
}

BluRay and DVD implement both interfaces:

public class BluRay : IProduct, IMovie
{
	public decimal Price { get; set; }
	public double Weight { get; set; }
	public int Stock { get; set; }
	public int RunningTime { get; set; }
}
public class DVD : IProduct, IMovie
{
	public decimal Price { get; set; }
	public double Weight { get; set; }
	public int Stock { get; set; }
	public int RunningTime { get; set; }
}

Now you can have a list of IProduct objects and call the Price property on all of them. You won’t have access to the RunningTime property, which is a good thing. In the original design it would simply have defaulted to 0 for all TShirt objects, so you could probably have got away with it in this specific scenario, but it still goes against ISP. As soon as you’re dealing with IMovie objects the RunningTime property makes perfect sense.
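A quick sketch of what the caller gains – the interfaces are trimmed to one member each for brevity:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public interface IProduct
{
    decimal Price { get; set; }
}

public interface IMovie
{
    int RunningTime { get; set; }
}

public class BluRay : IProduct, IMovie
{
    public decimal Price { get; set; }
    public int RunningTime { get; set; }
}

public class TShirt : IProduct
{
    public decimal Price { get; set; }
}
```

A List&lt;IProduct&gt; lets you sum the prices over everything in stock, while products.OfType&lt;IMovie&gt;() hands you only the items where RunningTime is meaningful – no downcasts, no inexplicable 0’s.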

A slightly different solution after breaking out the IMovie interface is that IMovie implements the IProduct interface:

public interface IMovie : IProduct
{
	int RunningTime { get; set; }
}

We can change the implementing objects as follows:

public class TShirt : IProduct
{
	public decimal Price { get; set; }
	public double Weight { get; set; }
	public int Stock { get; set; }
}
public class BluRay : IMovie
{
	public decimal Price { get; set; }
	public double Weight { get; set; }
	public int Stock { get; set; }
	public int RunningTime { get; set; }
}
public class DVD : IMovie
{
	public double Weight { get; set; }
	public int Stock { get; set; }
	public int RunningTime { get; set; }
	public decimal Price { get; set; }
}

So the DVD and BluRay classes are still of the IProduct type as well because the IMovie interface derives from it.

A real life example

If you’ve worked with ASP.NET then you must have come across the MembershipProvider class in the System.Web.Security namespace. It is the base class of the built-in membership providers such as SqlMembershipProvider, which provides functions to create new users, authenticate your users, lock them out, update their passwords and much more. If the built-in membership types are not suitable for your purposes then you can create your own custom membership mechanism by deriving from the MembershipProvider abstract class:

public class CustomMembershipProvider : MembershipProvider

I won’t show the implemented version because it’s too long. MembershipProvider forces you to implement a whole range of methods, e.g.:

public override bool ChangePassword(string username, string oldPassword, string newPassword)
public override bool DeleteUser(string username, bool deleteAllRelatedData)
public override bool EnablePasswordRetrieval
public override int MinRequiredPasswordLength

At the time of writing this post MembershipProvider includes 27 methods and properties for you to implement. It has been criticised due to its size and the fact that if you only need half of the methods then the rest will throw the dreaded NotImplementedException. If you’re building a custom Login control then all you might be interested in is the following method:

public override bool ValidateUser(string username, string password)

At best you can provide some null objects to quietly ignore the unnecessary methods but it may be confusing to the callers that expect a reasonable result from a method.

An alternative solution in cases where you don’t own the code is to follow the Adapter pattern to hide the unnecessary parts of the original abstraction.
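Here is a minimal sketch of that idea. IUserValidator and the adapter class are made-up names, and a tiny stand-in replaces the real MembershipProvider so the snippet stays self-contained:

```csharp
using System;

// Simplified stand-in for System.Web.Security.MembershipProvider,
// which declares many more abstract members.
public abstract class MembershipProviderStandIn
{
    public abstract bool ValidateUser(string username, string password);
    public abstract bool ChangePassword(string username, string oldPassword, string newPassword);
    // ...roughly 25 further members in the real class
}

public class DemoProvider : MembershipProviderStandIn
{
    public override bool ValidateUser(string username, string password)
    {
        return username == "admin" && password == "secret";
    }

    public override bool ChangePassword(string username, string oldPassword, string newPassword)
    {
        throw new NotImplementedException(); // never reached through the adapter
    }
}

// The narrow role interface the login code actually depends on
public interface IUserValidator
{
    bool Validate(string username, string password);
}

// Adapter: exposes only the single member the caller cares about
public class MembershipValidatorAdapter : IUserValidator
{
    private readonly MembershipProviderStandIn _provider;

    public MembershipValidatorAdapter(MembershipProviderStandIn provider)
    {
        _provider = provider;
    }

    public bool Validate(string username, string password)
    {
        return _provider.ValidateUser(username, password);
    }
}
```

A custom Login control can then depend on IUserValidator alone and never sees the dozens of other members it has no use for.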

View the list of posts on Architecture and Patterns here.

SOLID design principles in .NET: the Liskov Substitution Principle

Introduction

After visiting the letters ‘S‘ and ‘O‘ in SOLID it’s time to discuss what ‘L’ has to offer. L stands for the Liskov Substitution Principle (LSP) and states that you should be able to use any derived class in place of a parent class and have it behave in the same manner without modification. It ensures that a derived class does not affect the behaviour of the parent class, i.e. that a derived class must be substitutable for its base class.

The principle is named after Barbara Liskov, who first described the problem in 1988.

More specifically substitutability means that a caller that communicates with an abstraction, i.e. a base class or an interface, should not be aware of and should not be concerned with the different concrete types of those abstractions. The client should be able to call BaseClass.DoSomething() and get a perfectly usable answer regardless of what the concrete class is in place of BaseClass. For this to work the derived class must also “behave well”, meaning:

  • They must not remove any base class behaviour
  • They must not violate base class invariants, i.e. the rules and constraints of a class, in order to preserve its integrity

The first point means the following: if a base class defines two abstract methods then a derived class must give meaningful implementations of both. If a derived class implements a method with ‘throw new NotImplementedException’ then it is not fully substitutable for its base class. It is a sign that the derived type is ‘NOT-REALLY-A’ member of the base type. In that case you’ll probably need to reconsider your class hierarchy.

All who study OOP must at some point come across the ‘IS-A’ relationship between a base class and a derived class: a Dog is an Animal, a Clerk is an Employee which is a Person, a Car is a vehicle etc. LSP refines this relationship with ‘IS-SUBSTITUTABLE-FOR’, meaning that an object is substitutable with another object in all situations without running into exceptions and unexpected behaviour.

Demo

As usual in this series on SOLID we’ll start with some code which violates LSP. We’ll then see why it’s bad and then correct it. The demo is loosely connected to the one we worked on in the SRP and OCP posts: an e-commerce application that can refund your money in case you send back the item(s) you purchased. At this company you can pay using different services such as PayPal. Consequently the refund will happen through the same service as well.

Open Visual Studio and create a new console application. We’ll start off with an enumeration of the payment services:

public enum PaymentServiceType
{
	PayPal = 1
	, WorldPay = 2
}

It would be great to explore the true web services these companies offer to the public but the following mock APIs will suffice:

public class PayPalWebService
{
    public string GetTransactionToken(string username, string password)
    {
        return "Hello from PayPal";
    }

    public string MakeRefund(decimal amount, string transactionId, string token)
    {
        return "Auth";
    }
}
public class WorldPayWebService
{
    public string MakeRefund(decimal amount, string transactionId, string username,
        string password, string productId)
    {
        return "Success";
    }
}

We concentrate on the Refund logic which the two services carry out slightly differently. What’s common is that the MakeRefund methods return a string that describes the result of the action.

We’ll eventually need a refund service that interacts with these APIs somehow but it will need some object that represents the payments. As the payments can go through the two services mentioned above, and possibly others in the future, we’ll need an abstraction for them. An abstract base class seems appropriate:

public abstract class PaymentBase
{
    public abstract string Refund(decimal amount, string transactionId);
}

We can now create the concrete classes for the PayPal and WorldPay payments:

public class PayPalPayment : PaymentBase
{
    public string AccountName { get; set; }
    public string Password { get; set; }

    public override string Refund(decimal amount, string transactionId)
    {
        PayPalWebService payPalWebService = new PayPalWebService();
        string token = payPalWebService.GetTransactionToken(AccountName, Password);
        string response = payPalWebService.MakeRefund(amount, transactionId, token);
        return response;
    }
}
public class WorldPayPayment : PaymentBase
{
    public string AccountName { get; set; }
    public string Password { get; set; }
    public string ProductId { get; set; }

    public override string Refund(decimal amount, string transactionId)
    {
        WorldPayWebService worldPayWebService = new WorldPayWebService();
        string response = worldPayWebService.MakeRefund(amount, transactionId, AccountName, Password, ProductId);
        return response;
    }
}

Each concrete Payment class will communicate with the appropriate payment service to log on and request a refund. This follows the Adapter pattern in that we’re wrapping the real APIs in our own classes. We’ll need to be able to identify the correct payment type. In the previous post we used a method called IsMatch in each concrete type – here we’ll take the Factory approach just to see another way of selecting a concrete class:

public class PaymentFactory
{
    public static PaymentBase GetPaymentService(PaymentServiceType serviceType)
    {
        switch (serviceType)
        {
            case PaymentServiceType.PayPal:
                return new PayPalPayment();
            case PaymentServiceType.WorldPay:
                return new WorldPayPayment();
            default:
                throw new NotImplementedException("No such service.");
        }
    }
}

The factory selects the correct implementation using the incoming enumeration. Read the blog post on the Factory pattern if you’re not sure what’s happening here.

We’re ready for the actual refund service which connects the above ingredients:

public class RefundService
{
	public bool Refund(PaymentServiceType paymentServiceType, decimal amount, string transactionId)
	{
		bool refundSuccess = false;
		PaymentBase payment = PaymentFactory.GetPaymentService(paymentServiceType);
		if ((payment as PayPalPayment) != null)
		{
			((PayPalPayment)payment).AccountName = "Andras";
			((PayPalPayment)payment).Password = "Passw0rd";
		}
		else if ((payment as WorldPayPayment) != null)
		{
			((WorldPayPayment)payment).AccountName = "Andras";
			((WorldPayPayment)payment).Password = "Passw0rd";
			((WorldPayPayment)payment).ProductId = "ABC";
		}

		string serviceResponse = payment.Refund(amount, transactionId);

		if (serviceResponse.Contains("Auth") || serviceResponse.Contains("Success"))
		{
			refundSuccess = true;
		}

		return refundSuccess;
	}
}

We get the payment object using the factory. We then immediately need to check its type in order to be able to assign values to the different properties in it. There are multiple problems with the current implementation:

  • We cannot simply take the payment object returned by the factory, we need to check its type – therefore we cannot substitute the subtype for its base type, hence we break LSP. Such if-else statements where you branch your logic based on some object’s type are telling signs of LSP violation
  • We need to extend the if-else statements as soon as a new provider is implemented, which also violates the Open-Closed Principle
  • We need to extend the serviceResponse.Contains bit as well if a new payment provider returns a different response, such as “OK”
  • The client, i.e. the RefundService object needs to intimately know about the different types of payment providers and their internal setup which greatly increases coupling
  • The client needs to know how to interpret the string responses from the services and that is not the correct approach – the individual services should be the only ones that can do that

The goal is to be able to take the payment object returned by the factory and call its Refund method without worrying about its exact type.

First of all let’s introduce a constructor in each Payment class that forces the clients to provide all the necessary parameters:

public class PayPalPayment : PaymentBase
{
    public PayPalPayment(string accountName, string password)
    {
        AccountName = accountName;
        Password = password;
    }

    public string AccountName { get; set; }
    public string Password { get; set; }

    public override string Refund(decimal amount, string transactionId)
    {
        PayPalWebService payPalWebService = new PayPalWebService();
        string token = payPalWebService.GetTransactionToken(AccountName, Password);
        string response = payPalWebService.MakeRefund(amount, transactionId, token);
        return response;
    }
}
public class WorldPayPayment : PaymentBase
{
    public WorldPayPayment(string accountId, string password, string productId)
    {
        AccountName = accountId;
        Password = password;
        ProductId = productId;
    }

    public string AccountName { get; set; }
    public string Password { get; set; }
    public string ProductId { get; set; }

    public override string Refund(decimal amount, string transactionId)
    {
        WorldPayWebService worldPayWebService = new WorldPayWebService();
        string response = worldPayWebService.MakeRefund(amount, transactionId, AccountName, Password, ProductId);
        return response;
    }
}

We need to update the factory accordingly:

public class PaymentFactory
{
    public static PaymentBase GetPaymentService(PaymentServiceType serviceType)
    {
        switch (serviceType)
        {
            case PaymentServiceType.PayPal:
                return new PayPalPayment("Andras", "Passw0rd");
            case PaymentServiceType.WorldPay:
                return new WorldPayPayment("Andras", "Passw0rd", "ABC");
            default:
                throw new NotImplementedException("No such service.");
        }
    }
}

The input parameters are hard-coded to keep things simple. In reality these can be read from a configuration file or sent in as parameters to the GetPaymentService method. We can now improve the RefundService class as follows:

public class RefundService
{
	public bool Refund(PaymentServiceType paymentServiceType, decimal amount, string transactionId)
	{
		bool refundSuccess = false;
		PaymentBase payment = PaymentFactory.GetPaymentService(paymentServiceType);			

		string serviceResponse = payment.Refund(amount, transactionId);

		if (serviceResponse.Contains("Auth") || serviceResponse.Contains("Success"))
		{
			refundSuccess = true;
		}

		return refundSuccess;
	}
}

We got rid of the downcasting issue. We now need to do something about the need to inspect the strings in the Contains method. This if statement still has to be extended if we introduce a new payment service and the client still has to know what “Success” means. If you think about it then ONLY the payment service objects should be concerned with this type of logic. The Refund method returns a string from the payment service but instead the string should be evaluated within the payment service itself, right? Let’s update the return type of the abstract Refund method in PaymentBase:

public abstract class PaymentBase
{
	public abstract bool Refund(decimal amount, string transactionId);
}

We can transfer the response interpretation logic to the respective Payment objects:

public class WorldPayPayment : PaymentBase
{
	public WorldPayPayment(string accountId, string password, string productId)
	{
		AccountName = accountId;
		Password = password;
		ProductId = productId;
	}

	public string AccountName { get; set; }
	public string Password { get; set; }
	public string ProductId { get; set; }

	public override bool Refund(decimal amount, string transactionId)
	{
		WorldPayWebService worldPayWebService = new WorldPayWebService();
		string response = worldPayWebService.MakeRefund(amount, transactionId, AccountName, Password, ProductId);
		return response.Contains("Success");
	}
}
public class PayPalPayment : PaymentBase
{
	public PayPalPayment(string accountName, string password)
	{
		AccountName = accountName;
		Password = password;
	}

	public string AccountName { get; set; }
	public string Password { get; set; }

	public override bool Refund(decimal amount, string transactionId)
	{
		PayPalWebService payPalWebService = new PayPalWebService();
		string token = payPalWebService.GetTransactionToken(AccountName, Password);
		string response = payPalWebService.MakeRefund(amount, transactionId, token);
		return response.Contains("Auth");
	}
}

The RefundService has been greatly simplified:

public class RefundService
{
	public bool Refund(PaymentServiceType paymentServiceType, decimal amount, string transactionId)
	{
		PaymentBase payment = PaymentFactory.GetPaymentService(paymentServiceType);
		return payment.Refund(amount, transactionId);
	}
}

There’s no need to downcast anything or to extend this method if a new service is introduced. Strict proponents of the Single Responsibility Principle may argue that the Payment classes are now bloated: they should not know how to process the string response from the web services. However, I think it’s well worth refactoring the initial code this way as it eliminates the drawbacks we started out with. Also, in a Domain Driven Design approach it’s perfectly reasonable to include the logic belonging to a single object within that object and not anywhere else.

A related principle is called ‘Tell, Don’t Ask‘. We violated this principle in the initial solution where we asked the Payment object about its exact type: if you see that you need to interrogate an object about its internal state in order to branch your code then it may be a candidate for refactoring. Move that logic into the object itself as a method and simply call that method. In other words, don’t ask an object about its state; instead tell it to perform what you want it to do.
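A tiny, made-up illustration of the difference:

```csharp
using System;

public class Account
{
    public decimal Balance { get; private set; }

    public Account(decimal openingBalance)
    {
        Balance = openingBalance;
    }

    // Tell: the caller asks the object to act and the rule lives in one place.
    // The "ask" style would be every caller writing
    //   if (account.Balance >= amount) { /* mutate the balance from outside */ }
    // which scatters the overdraft rule across the code base.
    public bool Withdraw(decimal amount)
    {
        if (amount <= 0 || amount > Balance)
        {
            return false;
        }
        Balance -= amount;
        return true;
    }
}
```

The caller simply calls account.Withdraw(amount) and the Account decides for itself whether the withdrawal is allowed.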

View the list of posts on Architecture and Patterns here.

SOLID design principles in .NET: the Open-Closed Principle

Introduction

In the previous post we talked about the letter ‘S’ in SOLID, i.e. the Single Responsibility Principle. Now it’s time to move to the letter ‘O’ which stands for the Open-Closed Principle (OCP). OCP states that classes should be open for extension and closed for modification. You should be able to add new features and extend a class without changing its internal behaviour. You can always add new behaviour to a class in the future. At the same time you should not have to recompile your application just to make room for new things. The main goal of the principle is to avoid breaking changes in an existing class as it can introduce bugs and errors in other parts of your application.

How is this even possible? The key to success is identifying the areas in your domain that are likely to change and programming to abstractions. Separate out behaviour into abstractions: interfaces and abstract classes. There’s then no limit to the variety of implementations that the dependent class can accept.

Demo

In the demo we’ll first write some code that calculates prices and does not follow OCP. We’ll then refactor that code to a better design. The demo project is very similar to the e-commerce one in the previous post and partially builds upon it so make sure to check it out as well.

Open Visual Studio and create a new console application. Insert a new folder called Model. The following three basic domain objects are the same as in the previous demo:

public class OrderItem
{
	public string Identifier { get; set; }
	public int Quantity { get; set; }
}
public enum PaymentMethod
{
	CreditCard
	, Cheque
}
public class PaymentDetails
{
	public PaymentMethod PaymentMethod { get; set; }
	public string CreditCardNumber { get; set; }
	public DateTime ExpiryDate { get; set; }
	public string CardholderName { get; set; }
}

ShoppingCart looks a bit different. It now includes a price calculation function depending on the Identifier property:

public class ShoppingCart
{
	private readonly List<OrderItem> _orderItems;

	public ShoppingCart()
	{
		_orderItems = new List<OrderItem>();
	}

	public IEnumerable<OrderItem> OrderItems
	{
		get { return _orderItems; }
	}

	public string CustomerEmail { get; set; }

	public void Add(OrderItem orderItem)
	{
		_orderItems.Add(orderItem);
	}

	public decimal TotalAmount()
	{
		decimal total = 0m;
		foreach (OrderItem orderItem in OrderItems)
		{
			if (orderItem.Identifier.StartsWith("Each"))
			{
				total += orderItem.Quantity * 4m;
			}
			else if (orderItem.Identifier.StartsWith("Weight"))
			{
				total += orderItem.Quantity * 3m / 1000; //1 kilogram
			}
			else if (orderItem.Identifier.StartsWith("Spec"))
			{
				total += orderItem.Quantity * .3m;
				int setsOfFour = orderItem.Quantity / 4;
				total -= setsOfFour * .15m; //discount on groups of 4 items
			}
		}
		return total;
	}
}

The TotalAmount function counts the total price in the cart. You can imagine that shops use many different strategies to calculate prices:

  • Price per unit
  • Price per unit of weight, such as price per kilogram
  • Special discount prices: buy 3, get 1 for free
  • Price depending on the Customer’s loyalty: loyal customers get 10% off

And there are many other strategies out there. Some of these are represented in the TotalAmount function by magic strings retrieved from the Identifier of the product. The decimals ‘4m’ etc. are the dollar prices, so here every product of a given pricing type has the same price for simplicity.

Such pricing rules probably change a lot in a real-world business, meaning that the programmer will need to revisit this if-else statement quite often to extend it with new rules and modify the existing ones. That type of code quickly gets out of hand. Imagine 100 else-if statements, possibly with nested ifs and more complex rules: if it’s Christmas AND you are a loyal customer AND you have a special coupon then the final price may depend on each of these conditions. Debugging and maintaining that code would soon become a nightmare. It would be a lot better if this particular method didn’t have to be modified at all. In other words we’d like to apply OCP so that we don’t need to extend this particular code every time there’s a change in the pricing rules.

Extending the if-else statements can introduce bugs and the application must be re-tested. We’ll need to test the ShoppingCart whereas we’re only interested in testing the pricing rule(s). Also, the pricing logic is tightly coupled with the ShoppingCart domain. Therefore if we change the pricing logic in the ShoppingCart object we’ll need to test all other objects that depend on ShoppingCart even if they absolutely have nothing to do with pricing rules. A more intelligent solution is to separate out the pricing logic to different classes and hide them behind an abstraction that ShoppingCart can refer to. The result is that you’ll have a higher number of classes but they are typically small and concentrate on some very specific functionality. This idea refers back to the Single Responsibility Principle of the previous post.

There are other advantages to creating new classes: they can be tested in isolation, there’s no other class that depends on them – at least to begin with – and as they are NEW classes in your code they have no legacy coupling to make them hard to design or test.

There are at least two design patterns that can come to the rescue: the Strategy pattern and the Template Method pattern. We’ll solve our particular problem using the Strategy pattern. If you don’t know what it is about then make sure to check out the link I’ve provided; I won’t introduce the pattern from scratch here.

Let’s first introduce an abstraction for a pricing strategy:

public interface IPriceStrategy
{
	bool IsMatch(OrderItem item);
	decimal CalculatePrice(OrderItem item);
}

The purpose of the IsMatch method will be to determine which concrete strategy to pick based on the OrderItem. This could be performed by a factory as well but it would probably make the solution more complex than necessary.

Let’s translate the if-else statements into concrete pricing strategies. We’ll start with the price per unit strategy:

public class PricePerUnitStrategy : IPriceStrategy
{
	public bool IsMatch(OrderItem item)
	{
		return item.Identifier.StartsWith("Each");
	}

	public decimal CalculatePrice(OrderItem item)
	{
		return item.Quantity * 4m;
	}
}

We still base the strategy selection on the product identifier. This may be good or bad, but that’s a separate discussion. The main point is that the strategy selection and price calculation logic is encapsulated within this separate class. We’ll do something similar for the other strategies:

public class PricePerKilogramStrategy : IPriceStrategy
{
	public bool IsMatch(OrderItem item)
	{
		return item.Identifier.StartsWith("Weight");
	}

	public decimal CalculatePrice(OrderItem item)
	{
		return item.Quantity * 3m / 1000;
	}
}
public class SpecialPriceStrategy : IPriceStrategy
{
	public bool IsMatch(OrderItem item)
	{
		return item.Identifier.StartsWith("Spec");
	}

	public decimal CalculatePrice(OrderItem item)
	{
		decimal total = 0m;
		total += item.Quantity * .3m;
		int setsOfFour = item.Quantity / 4;
		total -= setsOfFour * .15m;
		return total;
	}
}

The next step is to introduce a calculator that will calculate the correct price. We’ll hide the calculator behind an interface to follow good programming practices:

public interface IPriceCalculator
{
	decimal CalculatePrice(OrderItem item);
}

That’s quite minimalistic but it will suffice. Often good OOP software will have many small classes and interfaces that concentrate on very specific tasks.

The implementation will select the correct strategy and calculate the price:

public class DefaultPriceCalculator : IPriceCalculator
{
    private readonly List<IPriceStrategy> _pricingRules;

    public DefaultPriceCalculator()
    {
        _pricingRules = new List<IPriceStrategy>();
        _pricingRules.Add(new PricePerKilogramStrategy());
        _pricingRules.Add(new PricePerUnitStrategy());
        _pricingRules.Add(new SpecialPriceStrategy());
    }

    public decimal CalculatePrice(OrderItem item)
    {
        return _pricingRules.First(r => r.IsMatch(item)).CalculatePrice(item);
    }
}

We build the list of available strategies in the constructor. In the CalculatePrice method we select the suitable pricing strategy using LINQ and the IsMatch implementations, and then call its CalculatePrice method.

Now we’re ready to simplify the ShoppingCart object:

public class ShoppingCart
{
    private readonly List<OrderItem> _orderItems;
    private readonly IPriceCalculator _priceCalculator;

    public ShoppingCart(IPriceCalculator priceCalculator)
    {
        _priceCalculator = priceCalculator;
        _orderItems = new List<OrderItem>();
    }

    public IEnumerable<OrderItem> OrderItems
    {
        get { return _orderItems; }
    }

    public string CustomerEmail { get; set; }

    public void Add(OrderItem orderItem)
    {
        _orderItems.Add(orderItem);
    }

    public decimal TotalAmount()
    {
        decimal total = 0m;
        foreach (OrderItem orderItem in OrderItems)
        {
            total += _priceCalculator.CalculatePrice(orderItem);
        }
        return total;
    }
}

All the consumer of the ShoppingCart class needs to do is to specify a concrete IPriceCalculator object, such as the DefaultPriceCalculator one and let it calculate the price based on the items in the shopping cart. The ShoppingCart is no longer responsible for the actual price calculation. That has been factored out to abstractions and smaller classes that are easy to test and carry out very specific tasks.
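To see the wiring from the consumer’s side, here is a condensed, self-contained sketch of the pieces above – only the price-per-unit strategy is included for brevity:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class OrderItem
{
    public string Identifier { get; set; }
    public int Quantity { get; set; }
}

public interface IPriceStrategy
{
    bool IsMatch(OrderItem item);
    decimal CalculatePrice(OrderItem item);
}

public class PricePerUnitStrategy : IPriceStrategy
{
    public bool IsMatch(OrderItem item) { return item.Identifier.StartsWith("Each"); }
    public decimal CalculatePrice(OrderItem item) { return item.Quantity * 4m; }
}

public interface IPriceCalculator
{
    decimal CalculatePrice(OrderItem item);
}

public class DefaultPriceCalculator : IPriceCalculator
{
    // Only one rule here; the full version registers all strategies
    private readonly List<IPriceStrategy> _pricingRules =
        new List<IPriceStrategy> { new PricePerUnitStrategy() };

    public decimal CalculatePrice(OrderItem item)
    {
        return _pricingRules.First(r => r.IsMatch(item)).CalculatePrice(item);
    }
}

public class ShoppingCart
{
    private readonly List<OrderItem> _orderItems = new List<OrderItem>();
    private readonly IPriceCalculator _priceCalculator;

    public ShoppingCart(IPriceCalculator priceCalculator)
    {
        _priceCalculator = priceCalculator;
    }

    public void Add(OrderItem orderItem) { _orderItems.Add(orderItem); }

    public decimal TotalAmount()
    {
        return _orderItems.Sum(i => _priceCalculator.CalculatePrice(i));
    }
}
```

The consumer simply writes new ShoppingCart(new DefaultPriceCalculator()), adds items and asks for TotalAmount() – the cart never touches a pricing rule directly.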

What if the domain owner comes along and tells you that there’s a new pricing rule? Instead of having to go through the if-else statements you can simply create a new pricing strategy:

public class BuyThreeGetOneFree : IPriceStrategy
{
	public bool IsMatch(OrderItem item)
	{
		return item.Identifier.StartsWith("Buy3OneFree");
	}

	public decimal CalculatePrice(OrderItem item)
	{
		decimal total = 0m;
		total += item.Quantity * 1m;
		int setsOfThree = item.Quantity / 3;
		total -= setsOfThree * 1m;
		return total;
	}
}

Add this new concrete class to the DefaultPriceCalculator class constructor and it will be found by the LINQ statement.

Alternative solution for the calculator

Based on the message by Frederik in the comments section, here is another, refactored version of the price calculator:

public class DefaultPriceCalculator : IPriceCalculator
{
    private readonly IEnumerable<IPriceStrategy> _priceStrategies;

    public DefaultPriceCalculator(IEnumerable<IPriceStrategy> priceStrategies)
    {
        _priceStrategies = priceStrategies;
    }

    public decimal CalculatePrice(OrderItem item)
    {
        return _priceStrategies.First(r => r.IsMatch(item)).CalculatePrice(item);
    }
}

Conclusion

Now you may think that you’ll need to introduce abstractions everywhere in your code for every little task. That’s not entirely correct. If you have a domain whose functionality changes a lot then you can apply OCP right away. Otherwise you may be better off not introducing abstractions at first because they also make your code somewhat more complex. This may be the case with brand new domains in your application where you just don’t have enough experience and the domain expert cannot help you either. In such a case start off with the simplest possible design, even if it involves an if statement with a magic string. It may even be acceptable to later introduce an else statement with another magic string to accommodate a change in the logic. However, as soon as you see that you have to change and/or extend that particular functionality again, factor it out to an abstraction. The following motto applies here:

“Fool me once, shame on you; fool me twice, shame on me.”

OCP doesn’t come for free. Implementing OCP will cost you some hours of refactoring and will add complexity to your design. Also, keep in mind that there’s probably no design that guarantees that you won’t have to change it at some point. The key is to identify those areas in your domain that are volatile and likely to change over time.

View the list of posts on Architecture and Patterns here.

SOLID design principles in .NET: the Single Responsibility Principle

The SOLID design principles are a collection of best practices for object oriented software design. The abbreviation comes from the first letter of each of the following 5 constituents:

  • Single responsibility principle (SRP)
  • Open-Closed principle (OCP)
  • Liskov substitution principle (LSP)
  • Interface segregation principle (ISP)
  • Dependency inversion principle (DIP)

Each of these principles is meant to make your code base easier to understand and maintain. They also ensure that your code does not become a mess with a large degree of interdependency that nobody wants to debug and extend. Of course you can write functioning software without adhering to these guidelines, but they are a good investment in the future development of your product, especially as far as maintainability and extensibility are concerned. Also, by following these points your code will become more object-oriented instead of employing a procedural style of coding with a lot of magic strings, enumerations and other primitives.

However, the principles do not replace the need for maintaining and refactoring your code so that it doesn’t get stale. They are a good set of tools to make your future work with your code easier. We will look at each of these in this series.

You can view these principles as guidelines. You should write code with these guidelines in mind and should aim to get as far as possible to reach each of them. You won’t always succeed of course, but even a bit of SOLID is more than the total lack of it.

SRP introduction

The Single Responsibility Principle states that every object should only have one reason to change and a single focus of responsibility. In other words every object should perform one thing only. You can apply this idea at different levels of your software: a method should only carry out one action; a domain object should only represent one domain within your business; the presentation layer should only be responsible for presenting your data; etc. This principle aims to achieve the following goals:

  • Short and concise objects: avoid the problem of a monolithic class design that is the software equivalent of a Swiss army knife
  • Testability: if a method carries out multiple tasks then it’s difficult to write a test for it
  • Readability: reading short and concise code is certainly easier than finding your way through some spaghetti code
  • Easier maintenance

A responsibility of a class usually represents a feature or a domain in your application. If you assign many responsibilities to a class or bloat your domain object then there’s a greater chance that you’ll need to change that class later. These responsibilities will be coupled together in the class making each individual responsibility more difficult to change without introducing errors in another. We can also call a responsibility a “reason to change”.

SRP is strongly related to what is called Separation of Concerns (SoC). SoC means dissecting a piece of software into distinct features that encapsulate unique behaviour and data that can be used by other classes. Here the term ‘concern’ represents a feature or behaviour of a class. Separating a programme into small and discrete ‘ingredients’ significantly increases code reuse, maintenance and testability.

Other related terms include the following:

  • Cohesion: how strongly related and focused the various responsibilities of a module are
  • Coupling: the degree to which each programme module relies on each one of the other modules

In a good software design we are striving for a high level of cohesion and a low level of coupling. A high level of coupling, also called tight coupling, usually means a lot of concrete dependency among the various elements of your software. This leads to a situation where changing the design of one class leads to the need of changing other classes that are dependent on the class you’ve just changed. Also, with tight coupling changing the design of one class can introduce errors in the dependent classes.

One last related technique is Test Driven Design or Test Driven Development (TDD). If you apply the test first approach of TDD and write your tests carefully then it will help you fulfil SRP, or at least it is a good way to ensure that you’re not too far from SRP. If you don’t know what TDD is then you can read about it here.

Demo

In the demo we’ll simulate an e-commerce application. We’ll first deliberately introduce a bloated Order object with a lot of responsibilities. We’ll then refactor the code to get closer to SRP.

Open Visual Studio and create a new console application. Insert a new folder called Model and insert a couple of basic models into it:

public class OrderItem
{
	public string Identifier { get; set; }
	public int Quantity { get; set; }
}
public class ShoppingCart
{
	public decimal TotalAmount { get; set; }
	public IEnumerable<OrderItem> Items { get; set; }
	public string CustomerEmail { get; set; }
}
public enum PaymentMethod
{
	CreditCard
	, Cheque
}
public class PaymentDetails
{
	public PaymentMethod PaymentMethod { get; set; }
	public string CreditCardNumber { get; set; }
	public DateTime ExpiryDate { get; set; }
	public string CardholderName { get; set; }
}

This is all pretty simple up to this point, I believe. Now comes the most important domain object, Order, which has quite a few areas of responsibility:

public class Order
{
	public void Checkout(ShoppingCart shoppingCart, PaymentDetails paymentDetails, bool notifyCustomer)
	{
		if (paymentDetails.PaymentMethod == PaymentMethod.CreditCard)
		{
			ChargeCard(paymentDetails, shoppingCart);
		}

		ReserveInventory(shoppingCart);

		if (notifyCustomer)
		{
			NotifyCustomer(shoppingCart);
		}
	}

	public void NotifyCustomer(ShoppingCart cart)
	{
		string customerEmail = cart.CustomerEmail;
		if (!String.IsNullOrEmpty(customerEmail))
		{
			try
			{
				//construct the email message and send it, implementation ignored
			}
			catch (Exception ex)
			{
				//log the emailing error, implementation ignored
			}
		}
	}

	public void ReserveInventory(ShoppingCart cart)
	{
		foreach (OrderItem item in cart.Items)
		{
			try
			{
				InventoryService inventoryService = new InventoryService();
				inventoryService.Reserve(item.Identifier, item.Quantity);

			}
			catch (InsufficientInventoryException ex)
			{
				throw new OrderException("Insufficient inventory for item " + item.Identifier, ex);
			}
			catch (Exception ex)
			{
				throw new OrderException("Problem reserving inventory", ex);
			}
		}
	}

	public void ChargeCard(PaymentDetails paymentDetails, ShoppingCart cart)
	{
		PaymentService paymentService = new PaymentService();

		try
		{
			paymentService.Credentials = "Credentials";
			paymentService.CardNumber = paymentDetails.CreditCardNumber;
			paymentService.ExpiryDate = paymentDetails.ExpiryDate;
			paymentService.NameOnCard = paymentDetails.CardholderName;
			paymentService.AmountToCharge = cart.TotalAmount;

			paymentService.Charge();
		}
		catch (AccountBalanceMismatchException ex)
		{
			throw new OrderException("The card gateway rejected the card based on the address provided.", ex);
		}
		catch (Exception ex)
		{
			throw new OrderException("There was a problem with your card.", ex);
		}

	}
}

public class OrderException : Exception
{
	public OrderException(string message, Exception innerException)
		: base(message, innerException)
	{
	}
}

The Order class won’t compile yet, so add a new folder called Services with the following objects representing the Inventory and Payment services:

public class InventoryService
{
	public void Reserve(string identifier, int quantity)
	{
		throw new InsufficientInventoryException();
	}
}

public class InsufficientInventoryException : Exception
{
}
public class PaymentService
{
	public string CardNumber { get; set; }
	public string Credentials { get; set; }
	public DateTime ExpiryDate { get; set; }
	public string NameOnCard { get; set; }
	public decimal AmountToCharge { get; set; }
	public void Charge()
	{
		throw new AccountBalanceMismatchException();
	}
}

public class AccountBalanceMismatchException : Exception
{
}

These are two very simple services with no real implementation that only throw exceptions.

Looking at the Order class we see that it performs a lot of stuff: checking out after the customer has placed an order, sending emails, logging exceptions, charging the credit card etc. Probably the most important method here is Checkout which calls upon the other methods in the class.

What is the problem with this design? After all it works well, customers can place orders, they get notified etc.

I think first and foremost the greatest flaw is a conceptual one actually. What has the Order domain object got to do with sending emails? What does it have to do with checking the inventory, logging exceptions or charging the credit card? These are all concepts that simply do not belong in an Order domain.

Imagine that the Order object can be used by different platforms: an e-commerce website with credit card payments or a physical shop where you pick your own goods from the shelf and pay by cash. This leads to several other issues as well:

  • Cheque payments don’t need card processing: cards are only charged in the Checkout method if the customer is paying by card – in any other case we should not involve the idea of card processing at all
  • Inventory reservations should be carried out by a separate service in case we’re buying in a physical shop
  • The customer will probably only need an email notification if they use the web platform of the business – otherwise the customer won’t even provide an email address. After all, why would you want to be notified by email if you buy the goods in person in a shop?

The problem here is that no matter what platform consumes the Order object it will need to know about the concepts of inventory management, credit card processing and emails. So any change in these concepts will affect not only the Order object but all others that depend on it.

Let’s refactor to a better design. The key is to regroup the responsibilities of the Checkout method into smaller units. Add a new folder called SRP to the project so that you’ll have access to the objects before and after the refactoring.

We know that we can process several types of Order: an online order, a cash order, a cheque order and possibly other types of Order that we haven’t thought of. This calls for an abstract Order object:

public abstract class Order
{
	private readonly ShoppingCart _shoppingCart;

	public Order(ShoppingCart shoppingCart)
	{
		_shoppingCart = shoppingCart;
	}

	public ShoppingCart ShoppingCart
	{
		get
		{
			return _shoppingCart;
		}
	}

	public virtual void Checkout()
	{
		//add common functionality to all Checkout operations
	}
}

We’ll separate out the responsibilities of the original Checkout method into interfaces:

public interface IReservationService
{
	void ReserveInventory(IEnumerable<OrderItem> items);
}
public interface IPaymentService
{
	void ProcessCreditCard(PaymentDetails paymentDetails, decimal moneyAmount);
}
public interface INotificationService
{
	void NotifyCustomerOrderCreated(ShoppingCart cart);
}

So we separated out the inventory management, customer notification and payment services into their respective interfaces. We can now create some concrete Order objects. The simplest case is when you go to a shop, place your goods into a real shopping cart and pay at the cashier. There’s no credit card processing and no email notification. Also, the inventory has probably been reduced when the goods were placed on the shelf, so there’s no need to reduce it further when the actual purchase happens:

public class CashOrder : Order
{
	public CashOrder(ShoppingCart shoppingCart)
		: base(shoppingCart)
	{ }
}

That’s all for the cash order which represents an immediate purchase in a shop where the customer pays by cash. You can of course pay by credit card in a shop so let’s create another order type:

public class CreditCardOrder : Order
{
	private readonly PaymentDetails _paymentDetails;
	private readonly IPaymentService _paymentService;

	public CreditCardOrder(ShoppingCart shoppingCart
		, PaymentDetails paymentDetails, IPaymentService paymentService) : base(shoppingCart)
	{
		_paymentDetails = paymentDetails;
		_paymentService = paymentService;
	}

	public override void Checkout()
	{
		_paymentService.ProcessCreditCard(_paymentDetails, ShoppingCart.TotalAmount);
		base.Checkout();
	}
}

The credit card payment must be processed hence we’ll need a Payment service to take care of that. We call upon its ProcessCreditCard method in the overridden Checkout method. Here the consumer platform can provide some concrete implementation of the IPaymentService interface, it doesn’t matter to the Order object.

Lastly we can have an online order with inventory management, payment service and email notifications:

public class OnlineOrder : Order
{
	private readonly INotificationService _notificationService;
	private readonly PaymentDetails _paymentDetails;
	private readonly IPaymentService _paymentService;
	private readonly IReservationService _reservationService;

	public OnlineOrder(ShoppingCart shoppingCart,
		PaymentDetails paymentDetails, INotificationService notificationService
		, IPaymentService paymentService, IReservationService reservationService)
		: base(shoppingCart)
	{
		_paymentDetails = paymentDetails;
		_paymentService = paymentService;
		_reservationService = reservationService;
		_notificationService = notificationService;
	}

	public override void Checkout()
	{
		_paymentService.ProcessCreditCard(_paymentDetails, ShoppingCart.TotalAmount);
		_reservationService.ReserveInventory(ShoppingCart.Items);
		_notificationService.NotifyCustomerOrderCreated(ShoppingCart);
		base.Checkout();
	}
}

The consumer application will provide concrete implementations for the notification, inventory management and payment services. The OnlineOrder object will not care what those implementations look like and will not be affected at all if you make a change in those implementations or send in a different concrete implementation. As you can see these are the responsibilities that are likely to change over time. However, the Order object and its concrete implementations won’t care any more.
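
To make this concrete, here is a minimal sketch of a composition root wiring up an OnlineOrder. The Default* service implementations below are hypothetical stand-ins; real ones would talk to a payment gateway, a warehouse system and an email server:

```csharp
public class DefaultPaymentService : IPaymentService
{
	public void ProcessCreditCard(PaymentDetails paymentDetails, decimal moneyAmount)
	{
		Console.WriteLine("Charging {0}", moneyAmount);
	}
}

public class DefaultReservationService : IReservationService
{
	public void ReserveInventory(IEnumerable<OrderItem> items)
	{
		foreach (OrderItem item in items)
		{
			Console.WriteLine("Reserving {0} x {1}", item.Quantity, item.Identifier);
		}
	}
}

public class DefaultNotificationService : INotificationService
{
	public void NotifyCustomerOrderCreated(ShoppingCart cart)
	{
		Console.WriteLine("Emailing order confirmation to {0}", cart.CustomerEmail);
	}
}

//composition root of the web platform
ShoppingCart cart = new ShoppingCart
{
	TotalAmount = 100m,
	CustomerEmail = "customer@example.com",
	Items = new List<OrderItem> { new OrderItem { Identifier = "ABC", Quantity = 2 } }
};
PaymentDetails paymentDetails = new PaymentDetails { PaymentMethod = PaymentMethod.CreditCard };

Order order = new OnlineOrder(cart, paymentDetails,
	new DefaultNotificationService(), new DefaultPaymentService(), new DefaultReservationService());
order.Checkout();
```

Swapping any of these implementations, e.g. replacing the email notification with an SMS sender, requires no change to OnlineOrder itself.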

Furthermore, a web platform will only concern itself with online orders now and not with point-of-sale ones such as CreditCardOrder and CashOrder. The platform that a cashier uses in the shop will probably use CashOrder and CreditCardOrder objects depending on the payment method of the customer. The web and point-of-sale platforms will no longer be affected by changes made to the inventory management, email notification and payment processing logic.

Also, note that we separated out the responsibilities into individual smaller interfaces and not a single large one with all responsibilities. This follows the letter ‘I’ in SOLID, the Interface Segregation Principle, which we’ll look at in a future post.

We are done with the refactoring, at least as far as SRP is concerned. We could still take up other areas of improvement, such as making the Order domain object cleaner by creating application services that take care of the Checkout process. It may not be correct to put all these services in a single domain object, but it depends on the philosophy you follow in your domain design. That leads us to discussions on DDD (Domain Driven Design), which is beyond the scope of this post.

View the list of posts on Architecture and Patterns here.

Design patterns and practices in .NET: the Service Locator anti-pattern

Introduction

The main responsibility of a Service Locator is to serve instances of services when consumers request them. The pattern is strongly linked to Dependency Injection and was introduced by Martin Fowler here.

The most common implementation of the pattern introduces a static factory. This factory can be configured with concrete services in the composition root of the application, such as Global.asax, Main, etc., depending on the type of application you’re developing. In other words the configuration happens before the first consumer can use it to extract a concrete service. Here you can think of a service as roughly equal to a dependency: the CustomerController has a dependency on ICustomerService, CustomerService has a dependency on ICustomerRepository, etc. So when a concrete implementation of the abstraction is needed, the caller tries to grab it from the Service Locator.

A Service Locator looks quite similar to an Inversion-of-Control (IoC) container at first. If you’re familiar with IoC containers such as StructureMap or Castle Windsor, then you’ll know that you can register your concrete types in the composition root. In StructureMap you can do this explicitly as follows:

x.For<ICustomerService>().Use<ConcreteCustomerService>();

The Service Locator configuration starts off in a similar manner. It’s essentially a dictionary of abstractions and their desired concrete types: ICustomerRepository – CustomerRepository; IProductService – ProductService. This is perfectly legitimate to do from the composition root. As we will see later consulting the service locator elsewhere in the application for concrete services is an anti-pattern.

Demo

We’ll simulate a dependency between the CustomerService and CustomerRepository classes where CustomerService requires a customer repository to consult the database for queries on the customer domain. Open Visual Studio and add the following standard generic implementation of a ServiceLocator:

public static class ServiceLocator
{
	private readonly static Dictionary<Type, object> _configuredServices = new Dictionary<Type, object>();

	public static T GetConfiguredService<T>()
	{
		return (T)ServiceLocator._configuredServices[typeof(T)];
	}

	public static void Register<T>(T service)
	{
		ServiceLocator._configuredServices[typeof(T)] = service;
	}
}

This is a very minimalistic implementation of the Service Locator. It’s void of exception handling, guard clauses and features such as loading the dependency graph from an XML file, but those would only add noise to the main discussion. The dependency map is stored in the private dictionary and the Register method is used, as you’ve probably guessed, to register dependencies. It is analogous to the .For and .Use methods in the StructureMap example above. Let’s add the following interfaces and classes to see how the locator can be used:

public class Customer
{
}
public interface ICustomerService
{
	Customer GetCustomer(int id);
}
public interface ICustomerRepository
{
	Customer GetCustomerFromDatabase(int id);
}
public class CustomerRepository : ICustomerRepository
{
	public Customer GetCustomerFromDatabase(int id)
	{
		return new Customer();
	}
}
public class CustomerService : ICustomerService
{
	private ICustomerRepository _customerRepository;

	public CustomerService()
	{
		_customerRepository = ServiceLocator.GetConfiguredService<ICustomerRepository>();
	}

	public Customer GetCustomer(int id)
	{
		return _customerRepository.GetCustomerFromDatabase(id);
	}
}

You should be able to follow along this far. The CustomerService class resolves its own dependency using the ServiceLocator. You can configure the locator in Main as follows:

static void Main(string[] args)
{
	ServiceLocator.Register<ICustomerRepository>(new CustomerRepository());
	Customer c = new CustomerService().GetCustomer(54);
}

Main represents the composition root of a Console application so that’s where you can register the dependency graph. Step through the app with F11 and you’ll see that CustomerRepository is registered and retrieved as expected.

The CustomerService class can resolve its own dependency on ICustomerRepository, so what’s the problem? We can register our concrete implementations, retrieve the stored implementation where it’s needed, register mock objects as concrete types in a Test Driven Development scenario, program against abstractions, write maintainable code and support late binding by changing the registration, so you’re a happy bunny, right? You shouldn’t be, as the ServiceLocator class has a negative effect on the re-usability of the classes that consume it:

  • The ServiceLocator dependency will drag along if you try to re-use a class with a call to the locator
  • It is not obvious for external clients calling CustomerService() that Dependency Injection is used

The CustomerService will loosely depend on CustomerRepository through the ICustomerRepository interface. This is perfectly legitimate and valid. However, it will be tightly coupled to the ServiceLocator class. Here’s the dependency graph:

Dependency graph with service locator

If you want to distribute the CustomerService class then you’ll have to attach the ServiceLocator class to the package. It must come along even if the person that wants to use your class is not intending to use the ServiceLocator class in any way because they have their own Dependency Injection solution, such as StructureMap or CastleWindsor. Also, the consumer will need to set up ServiceLocator in the composition root otherwise they will get an exception. As the ServiceLocator may well reside in a different module, even that module must be redistributed for the CustomerService to be usable.

CustomerService forces its users to follow the Dependency Injection strategy employed within it. There’s no room for other strategies unfortunately. Developers must simply accept the existence of the service locator. Also, there’s no way of telling that there’s a hidden dependency just by looking at the class’s signature, which is what consumers will see first when creating a new CustomerService object. If the consumer doesn’t set up ServiceLocator appropriately then they will get an exception when using the CustomerService constructor. Depending on the exception handling strategy all they may get is a KeyNotFoundException. The consumer will then ask the questions: what key? Why is it not found? What are you talking about? WHY YOU NO WORK!!??? The consumer must know about the internals of the CustomerService class, which usually indicates a higher-than-desired level of coupling.

We can therefore rule out this pattern as it brings with it a fully redundant dependency which we can get rid of easily using constructor injection:

public CustomerService(ICustomerRepository customerRepository)
{
	_customerRepository = customerRepository;
}

There’s simply no advantage to this pattern that cannot be achieved with an alternative solution such as constructor injection coupled with an IoC container. CustomerService as it stands is not self-documenting. Its signature does not reveal anything about its dependency needs. Imagine that you download this API from NuGet and call CustomerService service = new CustomerService(). Your assumption would be that this is a fairly simple class that does not have any external dependencies, which turns out not to be true.
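
For comparison, here is a sketch of what the whole class might look like once the locator is removed; the dependency is now impossible to miss:

```csharp
public class CustomerService : ICustomerService
{
	private readonly ICustomerRepository _customerRepository;

	//the dependency is declared in the signature, visible to every consumer
	public CustomerService(ICustomerRepository customerRepository)
	{
		_customerRepository = customerRepository;
	}

	public Customer GetCustomer(int id)
	{
		return _customerRepository.GetCustomerFromDatabase(id);
	}
}

//composition root
ICustomerService customerService = new CustomerService(new CustomerRepository());
Customer customer = customerService.GetCustomer(54);
```

Forgetting to supply the dependency now produces a compiler error instead of a runtime KeyNotFoundException.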

You can misuse IoC containers in the same way actually. It’s fine to use IoCs to resolve your dependencies “behind the scenes” but they – or at least some of them – allow the users to fetch concrete types from the container. In StructureMap you’d do it as follows:

StructureMap.ObjectFactory.Container.GetInstance<CustomerRepository>()

You should avoid using this type of dependency resolution for the same reasons why you wouldn’t use a ServiceLocator and its GetConfiguredService method.

Note that this pattern being an anti-pattern is a controversial topic. You can check out this post that offers another viewpoint and argues that Service Locator is indeed a proper design pattern.

View the list of posts on Architecture and Patterns here.

Design patterns and practices in .NET: the Prototype pattern

Introduction

The formal definition of the prototype pattern is the following: specify the kinds of objects to create using a prototypical instance and create new objects by copying this prototype.

What this basically says is instead of using ‘new’ to create a new object we’re going to use a prototype, an existing object to specify the new objects we’re going to create. Then we create new objects by copying from this prototype. So the prototype is a master, a blueprint and the other objects we create will be copies of that object. Another word that can be used instead of ‘copy’ is ‘clone’. So this pattern is very much about cloning objects. A real life example could be a photocopy machine that can get you exact copies of the original document instead of asking the original source to send you a brand new one. Making a copy in this case is a cheaper and a lot more efficient way of getting a copy of the object, i.e. the document.

The implementation of the pattern is very easy as you’ll see, almost confusingly easy. You may even ask yourself the question: is this really a pattern??

Demo

Open Visual Studio and create a new console application. We’ll simulate a reader that analyses the contents of web pages. You’ll need a reference to the System.Net.Http library. Insert the following class:

public class DocumentReader
{
	private string _pageTitle;
	private int _headerCount;
	private string _bodyContent;

	public DocumentReader(Uri uri)
	{
		HttpClient httpClient = new HttpClient();
		Task<string> contents = httpClient.GetStringAsync(uri);
		string stringContents = contents.Result;
		Analyse(stringContents);
	}

	private void Analyse(string stringContents)
	{
		_pageTitle = "Homepage";
		_headerCount = 2;
		_bodyContent = "Welcome to my homepage";
	}

	public void PrintPageData()
	{
		Console.WriteLine("Page title: {0}, header count: {1}, body: {2}", _pageTitle, _headerCount, _bodyContent);
	}
}

So we send in a URI to the constructor which downloads the string content of that URI. The Analyse method then fakes a true string content analysis. PrintPageData simply prints these findings in the console.

You can use this reader from Main as follows:

static void Main(string[] args)
{
	DocumentReader reader = new DocumentReader(new Uri("http://bbc.co.uk"));
	reader.PrintPageData();
	Console.ReadKey();
}

In a true implementation of the document reader we would probably parse the HTML document and try to find the real title, the body contents, the headers and many more properties. However, even a true implementation of the Analyse method would run a lot faster than the actual download in the httpClient.GetStringAsync(uri) call. You’ll see that there’s a delay before we see the printout. The delay is not very significant as the HttpClient object coupled with the Task library is very efficient. However, we don’t want to cause the same delay if we need a copy of the page data.

The first solution is of course to create a new copy of the document reader, pass in bbc.co.uk and let it fetch the page data again. In other words we make the web request twice, which is probably not very clever if we need a copy of data that has already been constructed. This is where the prototype pattern comes into the picture: we can make a copy of the document reader without having to perform the HTTP request again.

As it turns out the prototype pattern can be implemented using an interface available in .NET: the ICloneable interface. The interface itself represents the abstract prototype; the implementing object will itself be of type ICloneable, i.e. a concrete prototype. The prototype needs to define a method which makes a copy of the object. The ICloneable interface has a Clone() method for this very purpose. The concrete prototype will have the ability to copy itself in the Clone() method, where you can choose between creating a deep copy or a shallow copy, more on this later.

Let’s see how it’s done:

public class DocumentReader : ICloneable
{
	private string _pageTitle;
	private int _headerCount;
	private string _bodyContent;

	public DocumentReader(Uri uri)
	{
		HttpClient httpClient = new HttpClient();
		Task<string> contents = httpClient.GetStringAsync(uri);
		string stringContents = contents.Result;
		Analyse(stringContents);
	}

	private void Analyse(string stringContents)
	{
		_pageTitle = "Homepage";
		_headerCount = 2;
		_bodyContent = "Welcome to my homepage";
	}

	public void PrintPageData()
	{
		Console.WriteLine("Page title: {0}, header count: {1}, body: {2}", _pageTitle, _headerCount, _bodyContent);
	}

	public object Clone()
	{
		return MemberwiseClone();
	}
}

The interface has one member to be implemented, the Clone method. Every object in .NET has a built-in protected method called MemberwiseClone which suits our purposes just fine. It copies all the data that exists in the original object, i.e. the prototype, and returns an object with the same data inside. However, be careful with this method as it cannot copy complex objects. Say that DocumentReader had a reference to another object, like a WebPage which in turn has its own private members: MemberwiseClone will not copy that object. In other words it creates a shallow copy as opposed to a deep copy: it copies the references of complex objects instead of the objects themselves. This may still be enough depending on what you want to achieve. Reading data through the shared reference is probably fine, but making changes to it will affect both copies. If you want to perform a deep copy then you’ll have to manually make a memberwise clone of the entire object graph.

You can use this code in Main as follows:

static void Main(string[] args)
{
	DocumentReader reader = new DocumentReader(new Uri("http://bbc.co.uk"));
	reader.PrintPageData();

	DocumentReader readerClone = reader.Clone() as DocumentReader;
	readerClone.PrintPageData();

	Console.ReadKey();
}

Go ahead and run this and you’ll see that there’s no delay at all before the second printout appears in the console window.

This is the easiest implementation of the prototype pattern in .NET. It doesn’t make any sense to go through the object construction again and make the second web request.
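
To illustrate the shallow vs. deep copy caveat mentioned above, here is a small hypothetical example with a reference-type member:

```csharp
public class WebPage
{
	public string Title { get; set; }
}

public class PageHolder : ICloneable
{
	public WebPage Page { get; set; }

	//shallow copy: the clone's Page property points to the SAME WebPage object
	public object Clone()
	{
		return MemberwiseClone();
	}

	//deep copy: clone the object graph member by member
	public PageHolder DeepClone()
	{
		return new PageHolder { Page = new WebPage { Title = Page.Title } };
	}
}

PageHolder original = new PageHolder { Page = new WebPage { Title = "Homepage" } };
PageHolder shallow = (PageHolder)original.Clone();
PageHolder deep = original.DeepClone();

original.Page.Title = "Changed";
Console.WriteLine(shallow.Page.Title); //"Changed": the reference is shared
Console.WriteLine(deep.Page.Title);    //"Homepage": an independent copy
```

If the clone only ever reads the shared WebPage, the shallow copy is sufficient; as soon as either object mutates it, the deep copy is the safer choice.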

Another similar scenario would have been making the same database calls. Often this is not necessary if all you need is the same set of data.

Yet another example is when you need a copy of an object with the same state. Imagine an object which has several private fields that can be manipulated with public methods such as the following:

  1. TurnRight(int speed)
  2. GoStraightAhead()
  3. Stop()
  4. BuySomethingInTheShop(int productNumber)

These methods can modify the internal state of the object. In case you need another object with the same internal state then you’d need to go through the same steps as above. You’ll need to keep track of these steps and the user inputs as well.

A better solution is to implement the ICloneable interface and clone the original object. You’ll then have access to the same state as in the prototype.
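
Here is a sketch of that idea, using a hypothetical Robot class whose internal state is built up through method calls:

```csharp
public class Robot : ICloneable
{
	private int _x;
	private int _y;

	public void GoStraightAhead()
	{
		_y++;
	}

	public void TurnRight(int speed)
	{
		_x += speed;
	}

	//the copy starts with the same internal state as the prototype
	public object Clone()
	{
		return MemberwiseClone();
	}

	public override string ToString()
	{
		return _x + "," + _y;
	}
}

Robot robot = new Robot();
robot.GoStraightAhead();
robot.TurnRight(3);

//instead of replaying and tracking every step, clone the current state
Robot copy = (Robot)robot.Clone();
Console.WriteLine(copy); //"3,1", the same state as the original
```

As the fields here are all value types, the shallow copy produced by MemberwiseClone is a complete, independent copy.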

View the list of posts on Architecture and Patterns here.
