A model .NET web service based on Domain Driven Design Part 7: the abstract Service

Introduction

Services can come in several different forms in an application. They are not associated with things but with actions and activities: they are the "doers", the manager classes, and they may span multiple domain objects. They define what they can DO for a client. Unlike Entities and Value Objects they are unlikely to have attributes, and they lack any form of identity. This post is the direct continuation of the previous post.

We can put services into three different groups:

Infrastructure services

These are all the cross-cutting services that you put into the Infrastructure layer: logging, emailing, caching etc. We’ve built a thin infrastructure layer in this skeleton.

Domain services

Normally, logic pertaining to a single domain object should be encapsulated within that object. If the domain object needs other domain objects to perform some logic, you can pass those in as parameters. E.g. to determine whether a Person can loan a certain book you can have the following method in the Person class:

private bool CanLoanBook(Book book)
{
     return !this.IsLateWithOtherBooks && !book.OnLoan;
}

This is a very small and uncomplicated piece of code.

However, this is not always the case. The logic may become so involved that putting it in one domain object would bloat that object with rules spanning multiple other domains. Example: a money transfer may involve domain objects such as Bank, Account, Customer, Transfer and possibly more. The money may only be transferred if there's a sufficient balance on the source Account, if the Customer has preferred status, if the Bank is not located in a money-laundering paradise etc. Where should this validation logic go? Probably the best solution is to build a specially designated IMoneyTransferService interface with a method called MakeTransfer(params).
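As a sketch, such a domain service could look like the following. The interface name comes from the text above, but the shape of Account and the concrete rule checks are my own illustrative assumptions:

```csharp
using System;

// Illustrative sketch only: IMoneyTransferService is named in the text,
// but Account's shape and the concrete rules are assumptions for this example.
public class Account
{
    public decimal Balance { get; set; }
    public bool OwnerHasPreferredStatus { get; set; }
}

public interface IMoneyTransferService
{
    bool MakeTransfer(Account source, Account target, decimal amount);
}

public class MoneyTransferService : IMoneyTransferService
{
    // Rules spanning Account, Customer, Bank etc. live here instead of
    // bloating any single domain object.
    public bool MakeTransfer(Account source, Account target, decimal amount)
    {
        if (amount <= 0m) return false;
        if (source.Balance < amount) return false;          // sufficient balance?
        if (!source.OwnerHasPreferredStatus) return false;  // preferred customer only
        source.Balance -= amount;
        target.Balance += amount;
        return true;
    }
}
```

The point is the placement, not the rules themselves: the cross-domain validation sits behind an interface in the domain layer rather than inside Account or Customer.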

Domain services help you avoid bloating a single domain object with logic. Just like other service types, they come in the form of interfaces and their implementations. They hold operations that do not naturally fit the domain objects they act on. However, be careful not to move all business rules of the domain into a service. These services are part of the domain layer.

Application services

Application services normally only carry out co-ordination tasks and are void of any extra logic themselves. They consult repositories and possibly other services to carry out tasks on behalf of the caller. The caller is typically some front-end layer: an MVC controller in a web project, a Web API controller in a Web API project or a XAML view-model in an MVVM WPF project. Application services therefore provide the connective tissue between the back end and the front end of the application.

It is this type of service that we’ll implement in this post. Before we do that let’s look at a couple of new terms.

The Request-Response pattern

The Request-Response pattern is a common pattern used within messaging in SOA – Service-Oriented Architecture. Consider the following interface of a service:

IEnumerable<Product> GetProducts(string region);
IEnumerable<Product> GetProducts(string region, string salesPoint);
IEnumerable<Product> GetProducts(string region, string salesPoint, DateTime fromDate);
IEnumerable<Product> GetProducts(string region, string salesPoint, DateTime fromDate, DateTime toDate);

The consumer can communicate with this interface in 4 different ways to retrieve a set of products. You can imagine that this list can grow on and on, making the maintenance of the service difficult. It's easier to create an object that holds all these criteria:

public class GetProductsRequest
{
    public string Region {get; set;}
    public string SalesPoint {get; set;}
    public DateTime FromDate {get; set;}
    public DateTime ToDate {get; set;}
}

You can then simplify the service as follows:

GetProductsResponse GetProducts(GetProductsRequest request);

…where GetProductsResponse may look as simple as the following:

public class GetProductsResponse
{
    public IEnumerable<Product> Products {get; set;}
}

All Response objects can derive from a base response that indicates e.g. whether the operation was successful, carries any specific error messages etc. The Amazon SDK for .NET uses this pattern heavily. You can check out a couple of examples here and here.
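One possible shape for such a base response is sketched below. This is an illustration only; the Success flag and the error list are my assumptions, and the base class actually built later in this series looks slightly different:

```csharp
using System.Collections.Generic;

// Sketch of a possible base response; the property names are assumptions.
public abstract class ResponseBase
{
    protected ResponseBase()
    {
        ErrorMessages = new List<string>();
    }

    public bool Success { get; set; }
    public List<string> ErrorMessages { get; private set; }
}

public class Product
{
    public string Name { get; set; }
}

// Every concrete response inherits the common status members.
public class GetProductsResponse : ResponseBase
{
    public IEnumerable<Product> Products { get; set; }
}
```

A caller then checks Success (and reads ErrorMessages) before touching Products, instead of wrapping every service call in try-catch.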

I’ll use this pattern in the service layer. You may think it is overkill in some cases but it’s up to you how you take ideas from here and how you apply them.

View models

View models are objects that represent the view-specific shape of a domain object. E.g. a domain may have a decimal price property with no specific formatting. You may want to show that price in your visitor's native format based on their culture settings; that will be the view representation of the price. Normally domain objects are not concerned with view details; that task is delegated to view models. Also, as we said before, domains should be void of technology-related things, such as MVC attributes. This is where a view model comes in handy, as you are free to do with it what you want: add attributes, format strings, extra properties that the UI is expecting etc.

In an MVC UI layer it is quite customary to pass view models to the views instead of the plain domain objects; otherwise the Razor code may need to perform extra formatting, which is not its job.

View models are therefore a technique to decouple the domain from its view-specific representation(s). A domain can have multiple different views depending on e.g. who is requesting the data: an admin, a simple user, Elvis Presley etc.

In this example app we'll employ a Customer view model in the service layer that will be served to the Web layer, which we'll look at later. Note that this is not a must in order to build a layered solution. Some readers may even find it excessive as it represents an extra round of conversion. I'm showing you how it can be done and I'll let you decide whether it's something for you or not.

Also note that the type of view model I'm talking about now is not the same as the view models in the Model-View-ViewModel (MVVM) pattern used extensively in XAML-based technologies such as Silverlight, WPF and Windows 8+ store apps. In MVVM a view-model is a lot richer than the DTO-like variants I've described above: it can have logic, event handlers, property change notifications etc., which would not fit "our" view models.

Application service layer

Open the project where we left off in the previous post and add a new C# class library called DDDSkeletonNET.Portal.ApplicationServices. Remove Class1.cs.

Add a folder called ViewModels and insert a class called CustomerViewModel in it:

public class CustomerViewModel
{
	public string Name { get; set; }
	public string AddressLine1 { get; set; }
	public string AddressLine2 { get; set; }
	public string City { get; set; }
	public string PostalCode { get; set; }
	public int Id { get; set; }
}

Add another folder called ModelConversions with the following extension methods:

public static class ConversionHelper
{
	public static CustomerViewModel ConvertToViewModel(this Customer customer)
	{
		return new CustomerViewModel()
		{
			AddressLine1 = customer.CustomerAddress.AddressLine1
			,AddressLine2 = customer.CustomerAddress.AddressLine2
			,City = customer.CustomerAddress.City
			,Id = customer.Id
			,Name = customer.Name
			,PostalCode = customer.CustomerAddress.PostalCode
		};
	}

	public static IEnumerable<CustomerViewModel> ConvertToViewModels(this IEnumerable<Customer> customers)
	{
		List<CustomerViewModel> customerViewModels = new List<CustomerViewModel>();
		foreach (Customer customer in customers)
		{
			customerViewModels.Add(customer.ConvertToViewModel());
		}
		return customerViewModels;
	}
}

Make sure to set the namespace to DDDSkeletonNET.Portal so that the extension methods will be available without setting any extra using statements elsewhere in the project.

Add a new folder called Messaging and insert the following base classes for the service requests and responses:

public abstract class ServiceRequestBase
{
}
public abstract class ServiceResponseBase
{
	public ServiceResponseBase()
	{
		this.Exception = null;
	}
	
	public Exception Exception { get; set; }
}

The base request is empty at this point, but we can add properties common to all requests later. We want to preserve any caught exception in the responses, which is why we have the Exception property.

Many request types will involve an ID: search, update, delete. Let’s create a base request class for those with integer IDs in the Messaging folder:

public abstract class IntegerIdRequest : ServiceRequestBase
{
	private int _id;

	public IntegerIdRequest(int id)
	{
		if (id < 1)
		{
			throw new ArgumentException("ID must be positive.");
		}
		_id = id;
	}

	public int Id { get { return _id; } }
}

We’ll expose the service calls in an interface. Add a new folder called Interfaces and insert the following ICustomerService interface:

public interface ICustomerService
{
	GetCustomerResponse GetCustomer(GetCustomerRequest getCustomerRequest);
	GetCustomersResponse GetAllCustomers();
	InsertCustomerResponse InsertCustomer(InsertCustomerRequest insertCustomerRequest);
	UpdateCustomerResponse UpdateCustomer(UpdateCustomerRequest updateCustomerRequest);
	DeleteCustomerResponse DeleteCustomer(DeleteCustomerRequest deleteCustomerRequest);
}

Insert a new folder called Customers within the Messaging folder. The request and response objects look as follows:

public class DeleteCustomerRequest : IntegerIdRequest
{
	public DeleteCustomerRequest(int customerId) : base(customerId)
	{}
}
public class DeleteCustomerResponse : ServiceResponseBase
{}
public class GetCustomerRequest : IntegerIdRequest
{
	public GetCustomerRequest(int customerId) : base(customerId)
	{}
}
public class GetCustomerResponse : ServiceResponseBase
{
	public CustomerViewModel Customer { get; set; }
}
public class GetCustomersResponse : ServiceResponseBase
{
	public IEnumerable<CustomerViewModel> Customers { get; set; }
}
public class InsertCustomerRequest : ServiceRequestBase
{
	public CustomerPropertiesViewModel CustomerProperties { get; set; }
}

…where CustomerPropertiesViewModel looks as follows:

public class CustomerPropertiesViewModel
{
	public string Name { get; set; }
	public string AddressLine1 { get; set; }
	public string AddressLine2 { get; set; }
	public string City { get; set; }
	public string PostalCode { get; set; }
}
public class InsertCustomerResponse : ServiceResponseBase
{}
public class UpdateCustomerRequest : IntegerIdRequest
{
	public UpdateCustomerRequest(int customerId) : base(customerId)
	{}

	public CustomerPropertiesViewModel CustomerProperties { get; set; }
}
public class UpdateCustomerResponse : ServiceResponseBase
{}
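Once a concrete implementation of ICustomerService exists (the next post builds it), a caller works with it purely through these request-response pairs. The FakeCustomerService below is a hypothetical stand-in used only to show the calling pattern, with condensed copies of the types defined above:

```csharp
using System;

// Condensed copies of the types above plus a hypothetical fake implementation.
public class CustomerViewModel
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public abstract class ServiceResponseBase
{
    public Exception Exception { get; set; }
}

public class GetCustomerRequest
{
    private readonly int _id;

    public GetCustomerRequest(int id)
    {
        if (id < 1) throw new ArgumentException("ID must be positive.");
        _id = id;
    }

    public int Id { get { return _id; } }
}

public class GetCustomerResponse : ServiceResponseBase
{
    public CustomerViewModel Customer { get; set; }
}

public interface ICustomerService
{
    GetCustomerResponse GetCustomer(GetCustomerRequest getCustomerRequest);
}

public class FakeCustomerService : ICustomerService
{
    public GetCustomerResponse GetCustomer(GetCustomerRequest getCustomerRequest)
    {
        // A real implementation would consult ICustomerRepository here and
        // put any caught exception into the response instead of throwing it.
        return new GetCustomerResponse
        {
            Customer = new CustomerViewModel { Id = getCustomerRequest.Id, Name = "GreatCustomer" }
        };
    }
}
```

The caller's contract is always the same: build a request, call the service, inspect response.Exception, then use the view model.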

You’ll need to add references to the Infrastructure and Domain layers for this to compile.

We now have the basis for the first concrete service. The next post will show how it can be implemented.

View the list of posts on Architecture and Patterns here.

A model .NET web service based on Domain Driven Design Part 6: the concrete Repository continued

Introduction

In the previous post we laid the foundation for the concrete domain-specific repositories. We implemented the IUnitOfWork and IUnitOfWorkRepository interfaces and created an abstract Repository class. It’s time to see how we can implement the Customer repository. We’ll continue where we left off in the previous post.

The concrete Customer repository

Add a new folder called Repositories in the Repository.Memory data access layer. Add a new class called CustomerRepository with the following stub:

public class CustomerRepository : Repository<Customer, int, DatabaseCustomer>, ICustomerRepository
{
	public CustomerRepository(IUnitOfWork unitOfWork, IObjectContextFactory objectContextFactory) : base(unitOfWork, objectContextFactory)
	{}

	public override Customer FindBy(int id)
	{
		throw new NotImplementedException();
	}

	public override DatabaseCustomer ConvertToDatabaseType(Customer domainType)
	{
		throw new NotImplementedException();
	}

	public IEnumerable<Customer> FindAll()
	{
		throw new NotImplementedException();
	}
}

You’ll need to add a reference to the Domain layer for this to compile.

We derive from the abstract Repository class and declare that the object is represented by the Customer class in the domain layer and by DatabaseCustomer in the data storage layer, and that its ID is of type int. We inherit the FindBy, ConvertToDatabaseType and FindAll methods. FindAll() comes indirectly from the IReadOnlyRepository interface, which is implemented by IRepository, which in turn is implemented by ICustomerRepository. And where are the three methods of IRepository? Remember that the Update, Insert and Delete methods have already been implemented in the Repository class, so we don't need to worry about them. Any time we create a new domain-specific repository those methods have already been taken care of.

Let’s implement the methods one by one. In the FindBy(int id) method we’ll need to consult the DatabaseCustomers collection first and get the DatabaseCustomer object with the incoming id. Then we’ll populate the Customer domain object from its database representation. The DatabaseCustomers collection can be retrieved using the IObjectContextFactory dependency of the base class. However, its backing field is private, so let’s add the following read-only property to Repository.cs:

public IObjectContextFactory ObjectContextFactory
{
	get
	{
		return _objectContextFactory;
	}
}

The FindBy(int id) method can be implemented as follows:

public override Customer FindBy(int id)
{
	DatabaseCustomer databaseCustomer = (from dc in ObjectContextFactory.Create().DatabaseCustomers
										 where dc.Id == id
										 select dc).FirstOrDefault();
	Customer customer = new Customer()
	{
		Id = databaseCustomer.Id
		,Name = databaseCustomer.CustomerName
		,CustomerAddress = new Address()
		{
			AddressLine1 = databaseCustomer.Address
			,AddressLine2 = string.Empty
			,City = databaseCustomer.City
			,PostalCode = "N/A"
		}
	};
	return customer;
}

So we populate the Customer object based on the properties of the DatabaseCustomer object. We don't store the postal code or address line 2 in the DB, so they are not populated. In a real-life scenario we'd probably need to rectify this by extending the DatabaseCustomer table, but for now we don't care. This example again shows the advantage of having complete freedom over the database and domain representations of a domain object: you can populate the domain object from its underlying database representation, and you can even consult other database objects if needed, as the domain object's properties may be dispersed across 2-3 or even more database tables. The domain object won't be concerned with such details. The domain and the database are completely detached, so you don't have to worry about changing either the DB or the domain representation.

Let’s factor out the population process to a separate method as it will be needed later:

private Customer ConvertToDomain(DatabaseCustomer databaseCustomer)
{
	Customer customer = new Customer()
	{
		Id = databaseCustomer.Id
		,Name = databaseCustomer.CustomerName
		,CustomerAddress = new Address()
		{
			AddressLine1 = databaseCustomer.Address
			,AddressLine2 = string.Empty
			,City = databaseCustomer.City
			,PostalCode = "N/A"
		}
	};
	return customer;
}

The updated version of FindBy(int id), now with a null check, looks as follows:

public override Customer FindBy(int id)
{
	DatabaseCustomer databaseCustomer = (from dc in ObjectContextFactory.Create().DatabaseCustomers
										 where dc.Id == id
										 select dc).FirstOrDefault();
	if (databaseCustomer != null)
	{
		return ConvertToDomain(databaseCustomer);
	}
	return null;
}

The ConvertToDatabaseType(Customer domainType) method will be used when inserting and modifying domain objects:

public override DatabaseCustomer ConvertToDatabaseType(Customer domainType)
{
	return new DatabaseCustomer()
	{
		Address = domainType.CustomerAddress.AddressLine1
		,City = domainType.CustomerAddress.City
		,Country = "N/A"
		,CustomerName = domainType.Name
		,Id = domainType.Id
		,Telephone = "N/A"
	};
}

Nothing fancy here I suppose.

Finally FindAll simply retrieves all customers from the database and converts each to a domain type:

public IEnumerable<Customer> FindAll()
{
	List<Customer> allCustomers = new List<Customer>();
	List<DatabaseCustomer> allDatabaseCustomers = (from dc in ObjectContextFactory.Create().DatabaseCustomers
						   select dc).ToList();
	foreach (DatabaseCustomer dc in allDatabaseCustomers)
	{
		allCustomers.Add(ConvertToDomain(dc));
	}
	return allCustomers;
}

That’s it, at present there’s nothing more to add to the CustomerRepository class. If you add any specialised queries to the ICustomerRepository interface they will need to be implemented here.

I’ll finish this post with a couple of remarks on Entity Framework.

Entity Framework

There is a different, more simplified way to implement the Unit of Work pattern in conjunction with Entity Framework. The database type is completely omitted from the structure, leading to the following Repository abstract class declaration:

public abstract class Repository<DomainType, IdType> : IUnitOfWorkRepository where DomainType : IAggregateRoot

So there’s no trace of how the domain is represented in the database. The EF designer lets you map the columns of a table to the properties of an object, and you can set the namespace of the objects generated from the .edmx file like this:

1. Remove the code generation tool from the edmx file properties:

Remove EF code generation tool

2. Change the namespace of the project by right-clicking anywhere on the EF diagram and selecting Properties:

Change namespace in entity framework

Change that value to the namespace of your domain objects.

3. If the ID of a domain must be auto-generated by the database then you’ll need to set the StoreGeneratedPattern to Identity on that field:

EF identity generation

You can fine-tune the mapping by opening the .edmx file in a text editor. It is standard XML so you can edit it as long as you know what you are doing.

This way we don’t need to declare both the domain type and the database type; we can work with a single type because it is common to EF and the domain model. It is reasonable to follow this path as it simplifies certain things, e.g. you don’t need the conversions between DB and domain types. However, I believe there are some things to keep in mind.

We inherently couple the domain object to its database counterpart. It may work with simple domain objects, but not with more complex ones whose database representation is spread out over several tables. We may also face the opposite scenario: say we made a design mistake in the database structuring phase and built a table that represents more than one domain object. We may not have the possibility of rebuilding that table as it contains millions of rows and is deployed in a production environment. How do we then map 2 or more domain objects to the same database table in the EF designer? It won’t work, at least I don’t know of a way to solve it. Also, mapping stored procedures to domain objects or domain object rules can be problematic. What’s more, you’ll need to mark as virtual those properties of your domain objects where you want to allow lazy loading by the ORM framework, like this:

public class Book
{
    public int Id { get; set; }
    public virtual Title Title { get; set; }
}

I think this breaks the persistence ignorance (PI) feature of proper POCO classes. We’re modifying the domain to accommodate a persistence technology.
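To see why the ORM insists on virtual, here is a rough, handwritten stand-in for the proxy class that EF generates dynamically at runtime; the real proxy issues a database query instead of the fake load below:

```csharp
public class Title
{
    public string Name { get; set; }
}

public class Book
{
    public int Id { get; set; }
    public virtual Title Title { get; set; }
}

// Handwritten illustration of what EF's dynamically generated proxy does:
// the override intercepts the first access and loads the entity on demand.
public class BookProxy : Book
{
    private Title _title;
    private bool _loaded;

    public override Title Title
    {
        get
        {
            if (!_loaded)
            {
                _title = LoadTitleFromDatabase(); // lazy load on first access
                _loaded = true;
            }
            return _title;
        }
        set
        {
            _title = value;
            _loaded = true;
        }
    }

    private Title LoadTitleFromDatabase()
    {
        // A real proxy would query the database here; we fake the result.
        return new Title { Name = "Lazily loaded title" };
    }
}
```

If Title were not virtual, the proxy could not override it and lazy loading would be impossible, which is exactly why the ORM forces this modification onto an otherwise persistence-ignorant POCO.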

However, you may be OK with these limitations. Your philosophy may well differ from my rigid PI and POCO approach. It is certainly desirable that the database tables and domains are as close as possible, but you don’t always have this luxury with large legacy databases. If you start off with a completely empty database with no tables then EF and the code-first approach – where the data tables will be created for you based on your self-written object context implementation – can simplify the repository development process. However, keep in mind that database tables are a lot more rigid and resistant to change than your loosely coupled and layered code. Once the database has been put into production and you want to change the structure of a domain object you may run into difficulties as the corresponding database table may not be modified that easily.

In the next post we’ll discuss the application services layer.

View the list of posts on Architecture and Patterns here.

A model .NET web service based on Domain Driven Design Part 5: the concrete Repository

Introduction

In the previous post we laid the foundation for our repository layer. It’s now time to see how those elements can be implemented. The implemented data access layer will be a simple in-memory data storage solution. I could have selected something more technical, like Entity Framework, but giving a detailed account of a data storage mechanism is not the goal of this series; it would probably only sidetrack us. The things we take up in this post should suffice for you to implement an EF-based concrete repository.

This is quite a large topic and it’s easy to get lost so this post will be devoted to laying the foundation of domain-specific repositories. In the next post we’ll implement the Customer domain repository.

The Customer repository

We need to expose the operations that the consumer of the code is allowed to perform on the Customer object. Recall from the previous post that we have 2 interfaces: one for read-only entities and another for the ones where we allow all CRUD operations. We want to be able to insert, select, modify and delete Customer objects. The domain-specific repository interfaces must be declared in the Domain layer.

This point is important to keep in mind: it is only the data access abstraction that we define in the Domain layer. It is up to the implementing data access mechanism to hide the implementation details. We do this in order to keep the domain objects completely free of persistence logic so that they remain persistence-ignorant. You can persist the domains in the repository layer any way you want, as long as those details do not bubble up to the other layers in any way, shape or form. Once you start referencing e.g. the DB object context in the web layer you have committed to using Entity Framework, and a switch to another technology will be all the more difficult.

Insert the following interface in the Domain/Customer folder:

public interface ICustomerRepository : IRepository<Customer, int>
{
}

Right now it’s an empty interface as we don’t want to declare any new data access capability over and above the methods already defined in the IRepository interface. Recall that we included the FindAll() method in the IReadOnlyRepository interface which allows us to retrieve all entities from the data store. This may be dangerous to perform on an entity where we have millions of rows in the database. Therefore you may want to remove that method from the interface. However, if we only have a few customers then it may be OK and you could have the following customer repository:

public interface ICustomerRepository : IRepository<Customer, int>
{
	IEnumerable<Customer> FindAll();
}

The Repository layer

Insert a new C# class library called DDDSkeletonNET.Portal.Repository.Memory. Add a reference to the Infrastructure layer as we’ll need access to the abstractions defined there. We’ll first need to implement the Unit of Work. Add the following stub implementation to the Repository layer:

public class InMemoryUnitOfWork : IUnitOfWork
{
	public void RegisterUpdate(IAggregateRoot aggregateRoot, IUnitOfWorkRepository repository)
	{}

	public void RegisterInsertion(IAggregateRoot aggregateRoot, IUnitOfWorkRepository repository)
	{}

	public void RegisterDeletion(IAggregateRoot aggregateRoot, IUnitOfWorkRepository repository)
	{}

	public void Commit()
	{}
}

Let’s fill this in. As we don’t have any built-in solution to track the changes to our entities we’ll have to store the changes ourselves in in-memory collections. Add the following private fields to the class:

private Dictionary<IAggregateRoot, IUnitOfWorkRepository> _insertedAggregates;
private Dictionary<IAggregateRoot, IUnitOfWorkRepository> _updatedAggregates;
private Dictionary<IAggregateRoot, IUnitOfWorkRepository> _deletedAggregates;

We keep track of the changes in these dictionaries. Recall the function of the unit of work repository: it performs the actual data persistence.

We’ll initialise these objects in the unit of work constructor:

public InMemoryUnitOfWork()
{
	_insertedAggregates = new Dictionary<IAggregateRoot, IUnitOfWorkRepository>();
	_updatedAggregates = new Dictionary<IAggregateRoot, IUnitOfWorkRepository>();
	_deletedAggregates = new Dictionary<IAggregateRoot, IUnitOfWorkRepository>();
}

Next we implement the registration methods which in practice only means that we’re filling up these dictionaries:

public void RegisterUpdate(IAggregateRoot aggregateRoot, IUnitOfWorkRepository repository)
{
	if (!_updatedAggregates.ContainsKey(aggregateRoot))
	{
		_updatedAggregates.Add(aggregateRoot, repository);
	}
}

public void RegisterInsertion(IAggregateRoot aggregateRoot, IUnitOfWorkRepository repository)
{
	if (!_insertedAggregates.ContainsKey(aggregateRoot))
	{
		_insertedAggregates.Add(aggregateRoot, repository);
	}
}

public void RegisterDeletion(IAggregateRoot aggregateRoot, IUnitOfWorkRepository repository)
{
	if (!_deletedAggregates.ContainsKey(aggregateRoot))
	{
		_deletedAggregates.Add(aggregateRoot, repository);
	}
}

We only want to add those changes that haven’t been added before hence the ContainsKey guard clause.

In the commit method we ask the unit of work repository to persist those changes:

public void Commit()
{
	foreach (IAggregateRoot aggregateRoot in _insertedAggregates.Keys)
	{
		_insertedAggregates[aggregateRoot].PersistInsertion(aggregateRoot);
	}

	foreach (IAggregateRoot aggregateRoot in _updatedAggregates.Keys)
	{
		_updatedAggregates[aggregateRoot].PersistUpdate(aggregateRoot);
	}

	foreach (IAggregateRoot aggregateRoot in _deletedAggregates.Keys)
	{
		_deletedAggregates[aggregateRoot].PersistDeletion(aggregateRoot);
	}
}
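Here is a condensed, self-contained version of this flow. The RecordingRepository is a test double of my own invention that stands in for a real repository, so we can see that nothing is persisted until Commit runs:

```csharp
using System.Collections.Generic;

public interface IAggregateRoot { }

public interface IUnitOfWorkRepository
{
    void PersistInsertion(IAggregateRoot aggregateRoot);
}

public class Customer : IAggregateRoot
{
    public string Name { get; set; }
}

// Test double: records which aggregates were handed to it for persistence.
public class RecordingRepository : IUnitOfWorkRepository
{
    public readonly List<IAggregateRoot> Persisted = new List<IAggregateRoot>();

    public void PersistInsertion(IAggregateRoot aggregateRoot)
    {
        Persisted.Add(aggregateRoot);
    }
}

// Insertion-only slice of the InMemoryUnitOfWork built above.
public class InMemoryUnitOfWork
{
    private readonly Dictionary<IAggregateRoot, IUnitOfWorkRepository> _insertedAggregates =
        new Dictionary<IAggregateRoot, IUnitOfWorkRepository>();

    public void RegisterInsertion(IAggregateRoot aggregateRoot, IUnitOfWorkRepository repository)
    {
        if (!_insertedAggregates.ContainsKey(aggregateRoot))
        {
            _insertedAggregates.Add(aggregateRoot, repository);
        }
    }

    public void Commit()
    {
        foreach (IAggregateRoot aggregateRoot in _insertedAggregates.Keys)
        {
            _insertedAggregates[aggregateRoot].PersistInsertion(aggregateRoot);
        }
    }
}
```

Registering the same aggregate twice has no effect thanks to the ContainsKey guard, and the repository only sees the aggregate when Commit is called.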

At present we don’t care about transactions and rollbacks as a fully optimised implementation is not the main goal here. We can however extend the Commit method to accommodate transactions:

public void Commit()
{
    using (TransactionScope scope = new TransactionScope())
    {
         //foreach loops...
         scope.Complete();
    }
}

You’ll need to reference the System.Transactions assembly for this to work.

So what does a unit of work repository implementation look like? We’ll define it as a base abstract class that each concrete repository must derive from. We’ll build up an example step by step. Add the following stub to the Repository layer:

public abstract class Repository<DomainType, IdType, DatabaseType> : IUnitOfWorkRepository where DomainType : IAggregateRoot
{
	private readonly IUnitOfWork _unitOfWork;

	public Repository(IUnitOfWork unitOfWork)
	{
		if (unitOfWork == null) throw new ArgumentNullException("unitOfWork");
		_unitOfWork = unitOfWork;
	}

	public void PersistInsertion(IAggregateRoot aggregateRoot)
	{
		
	}

	public void PersistUpdate(IAggregateRoot aggregateRoot)
	{
			
	}

	public void PersistDeletion(IAggregateRoot aggregateRoot)
	{
		
	}
}

There’s not much functionality in here yet, but it’s an important first step. The Repository abstract class implements IUnitOfWorkRepository so it will be the persistence workhorse of the data storage. It has three type parameters: DomainType, IdType and DatabaseType.

IdType probably sounds familiar by now so I won’t go into more detail. Then we distinguish between the domain type and the database type. The domain type will be the type of the domain class, such as the Customer domain we’ve created. The database type will be the database representation of the same domain.

Why might we need a database type? It is not guaranteed that a domain object will have the same structure in the storage as in the domain layer. In the domain layer you have the freedom of adding, modifying and removing properties, rules etc. You may not have the same freedom in a relational database. If a data table is used by stored procedures and user-defined functions then you cannot just remove columns at will without breaking a lot of other DB logic. With the advent of NoSQL databases such as MongoDB or RavenDB, which allow a very loose and flexible data structure, this may change, but at the time of writing relational databases are still the first choice for data storage. Usually, as soon as a database goes into production and LIVE data starts to fill the tables with potentially hundreds of thousands of rows every day, the table structure becomes a lot more rigid than your domain classes. Hence it can be a good idea to isolate the “domain” and “database” representations of a domain object. It is of course only the concrete repository layer that should know how a domain object is represented in the data storage.

Before we continue with the Repository class let’s simulate the database representation of the Customer class. This can be likened to the objects that the EF or Linq to SQL automation tools generate for you. Insert a new folder called Database in the Repository.Memory layer. Insert the following class into it:

public class DatabaseCustomer
{
	public int Id { get; set; }
	public string CustomerName { get; set; }
	public string Address { get; set; }
	public string Country { get; set; }
	public string City { get; set; }
	public string Telephone { get; set; }
}

This example shows an advantage with having separate domain and database representations. Recall that the Customer domain has an Address value object. In this example we imagine that there’s no separate Address table, we didn’t think of that when we designed the original database. As the database has been in use for some time it’s probably even too late to create a separate Address table so that we don’t interrupt the LIVE system.

However, we don’t care because we’ve allowed for this possibility by the type parameters. We’re free to construct and convert between domain and database objects in the concrete repository layer.

The next object we’ll need is an imitation of the DB object context in EF and Linq to SQL. I realise that I may be talking too much about these specific ORM technologies, but they probably suit the vast majority of data-driven .NET projects out there. Insert the following class into the Database folder:

public class InMemoryDatabaseObjectContext
{
	//simulation of database collections
	public List<DatabaseCustomer> DatabaseCustomers { get; set; }

	public InMemoryDatabaseObjectContext()
	{
		InitialiseDatabaseCustomers();
	}

	public void AddEntity<T>(T databaseEntity)
	{
		if (databaseEntity is DatabaseCustomer)
		{
			//cast via object: T is an unconstrained type parameter, so a direct "as" cast won't compile
			DatabaseCustomer databaseCustomer = (object)databaseEntity as DatabaseCustomer;
			databaseCustomer.Id = DatabaseCustomers.Count + 1;
			DatabaseCustomers.Add(databaseCustomer);
		}
	}

	public void UpdateEntity<T>(T databaseEntity)
	{
		if (databaseEntity is DatabaseCustomer)
		{
			DatabaseCustomer dbCustomer = (object)databaseEntity as DatabaseCustomer;
			DatabaseCustomer dbCustomerToBeUpdated = (from c in DatabaseCustomers where c.Id == dbCustomer.Id select c).FirstOrDefault();
			if (dbCustomerToBeUpdated == null) return;
			dbCustomerToBeUpdated.Address = dbCustomer.Address;
			dbCustomerToBeUpdated.City = dbCustomer.City;
			dbCustomerToBeUpdated.Country = dbCustomer.Country;
			dbCustomerToBeUpdated.CustomerName = dbCustomer.CustomerName;
			dbCustomerToBeUpdated.Telephone = dbCustomer.Telephone;
		}
	}

	public void DeleteEntity<T>(T databaseEntity)
	{
		if (databaseEntity is DatabaseCustomer)
		{
			DatabaseCustomer dbCustomer = databaseEntity as DatabaseCustomer;
			DatabaseCustomer dbCustomerToBeDeleted = (from c in DatabaseCustomers where c.Id == dbCustomer.Id select c).FirstOrDefault();
			DatabaseCustomers.Remove(dbCustomerToBeDeleted);
		}
	}

	private void InitialiseDatabaseCustomers()
	{
		DatabaseCustomers = new List<DatabaseCustomer>();
		DatabaseCustomers.Add(new DatabaseCustomer() { Address = "Main street", City = "Birmingham", Country = "UK", CustomerName = "GreatCustomer", Id = 1, Telephone = "N/A" });
		DatabaseCustomers.Add(new DatabaseCustomer() { Address = "Strandvägen", City = "Stockholm", Country = "Sweden", CustomerName = "BadCustomer", Id = 2, Telephone = "123345456" });
		DatabaseCustomers.Add(new DatabaseCustomer() { Address = "Kis utca", City = "Budapest", Country = "Hungary", CustomerName = "FavouriteCustomer", Id = 3, Telephone = "987654312" });
	}
}

First we have a collection of DB customer objects. In the constructor we initialise the collection so that we don't start with an empty one. Then we add insertion, update and deletion methods with a type parameter T to keep them generic. These are somewhat analogous to the InsertOnSubmit, SubmitChanges and DeleteOnSubmit methods of the Linq to SQL object context. The implementation itself is not too clever of course, as we need to check the type, but it's OK for now. Perfection is not the goal in this part of the code; the automation tools will generate far more professional code for you from the database. In AddEntity we assign an ID based on the number of elements in the collection, which simulates the auto-assigned identity column of relational databases.
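As a quick sanity check of the ID simulation, consider the following usage sketch, assuming the classes above are compiled into a small console project (the customer data below is made up):

```csharp
InMemoryDatabaseObjectContext context = new InMemoryDatabaseObjectContext();
DatabaseCustomer newCustomer = new DatabaseCustomer()
{
	CustomerName = "NewCustomer",
	Address = "High street",
	City = "London",
	Country = "UK",
	Telephone = "N/A"
};
// the context assigns the next free ID, imitating an identity column
context.AddEntity<DatabaseCustomer>(newCustomer);
Console.WriteLine(newCustomer.Id); // 4: three seeded customers plus this one
```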

We'll need access to this InMemoryDatabaseObjectContext in the repository layer. In a real system this object will be used intensively, so access to it should be regulated. Creating a new object context for every single DB operation is not too clever, so we'll hide the access behind a thread-safe lazy singleton class. If you don't know what these terms mean, check out my post on the singleton design pattern; I won't repeat that material here.

Insert the following abstract factory interface in the Database folder:

public interface IObjectContextFactory
{
	InMemoryDatabaseObjectContext Create();
}

The interface will be implemented by the following class:

public class LazySingletonObjectContextFactory : IObjectContextFactory
{
	public InMemoryDatabaseObjectContext Create()
	{
		return InMemoryDatabaseObjectContext.Instance;
	}
}

The InMemoryDatabaseObjectContext object does not have any Instance property yet, so add the following code to it:

public static InMemoryDatabaseObjectContext Instance
{
	get
	{
		return Nested.instance;
	}
}

private class Nested
{
	static Nested()
	{
	}
	internal static readonly InMemoryDatabaseObjectContext instance = new InMemoryDatabaseObjectContext();
}

If you don’t understand what this code does then make sure you read through the post on the singleton pattern.
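A two-line check shows what the factory buys us: every Create() call hands out the same lazily initialised instance.

```csharp
IObjectContextFactory factory = new LazySingletonObjectContextFactory();
InMemoryDatabaseObjectContext first = factory.Create();
InMemoryDatabaseObjectContext second = factory.Create();
// both variables point to the single nested instance
Console.WriteLine(object.ReferenceEquals(first, second)); // True
```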

We’ll need a reference to the abstract factory in the Repository object, so modify the private field declarations and the constructor as follows:

private readonly IUnitOfWork _unitOfWork;
private readonly IObjectContextFactory _objectContextFactory;

public Repository(IUnitOfWork unitOfWork, IObjectContextFactory objectContextFactory)
{
	if (unitOfWork == null) throw new ArgumentNullException("unitOfWork");
	if (objectContextFactory == null) throw new ArgumentNullException("objectContextFactory");
	_unitOfWork = unitOfWork;
	_objectContextFactory = objectContextFactory;
}

At this point it’s also clear that we’ll need to be able to convert a domain type to a database type. This is best implemented in the concrete repository classes so it’s enough to add an abstract method in the Repository class:

public abstract DatabaseType ConvertToDatabaseType(DomainType domainType);

We can implement the Persist methods as follows:

public void PersistInsertion(IAggregateRoot aggregateRoot)
{
	DatabaseType databaseType = RetrieveDatabaseTypeFrom(aggregateRoot);
	_objectContextFactory.Create().AddEntity<DatabaseType>(databaseType);
}

public void PersistUpdate(IAggregateRoot aggregateRoot)
{
	DatabaseType databaseType = RetrieveDatabaseTypeFrom(aggregateRoot);
	_objectContextFactory.Create().UpdateEntity<DatabaseType>(databaseType);
}

public void PersistDeletion(IAggregateRoot aggregateRoot)
{
	DatabaseType databaseType = RetrieveDatabaseTypeFrom(aggregateRoot);
	_objectContextFactory.Create().DeleteEntity<DatabaseType>(databaseType);
}

private DatabaseType RetrieveDatabaseTypeFrom(IAggregateRoot aggregateRoot)
{
	DomainType domainType = (DomainType)aggregateRoot;
	DatabaseType databaseType = ConvertToDatabaseType(domainType);
	return databaseType;
}

We need to convert the incoming IAggregateRoot object to the database type, as that is what the database understands. The database type is the storage layer's own representation of the domain, so we have to talk to the database in those terms.
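As an illustration only – the real CustomerRepository arrives in a later post – a concrete ConvertToDatabaseType implementation might flatten the Customer domain with its Address value object into the flat DatabaseCustomer representation. The mapping below is a hypothetical sketch; the domain has no Country or Telephone property yet, so those columns are left empty:

```csharp
public override DatabaseCustomer ConvertToDatabaseType(Customer customer)
{
	// flatten the Address value object into the columns of the Customer table
	return new DatabaseCustomer()
	{
		Id = customer.Id,
		CustomerName = customer.Name,
		Address = customer.CustomerAddress.AddressLine1,
		City = customer.CustomerAddress.City,
		Country = string.Empty,  // not modelled in the domain yet
		Telephone = string.Empty // not modelled in the domain yet
	};
}
```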

So these are the persistence methods but we need to register the changes first. Add the following methods to Repository.cs:

public void Update(DomainType aggregate)
{
	_unitOfWork.RegisterUpdate(aggregate, this);
}

public void Insert(DomainType aggregate)
{
	_unitOfWork.RegisterInsertion(aggregate, this);
}

public void Delete(DomainType aggregate)
{
	_unitOfWork.RegisterDeletion(aggregate, this);
}

These operations are so common and repetitive that we can put them in this abstract base class instead of letting the domain-specific repositories implement them over and over again. All they do is add the operation to the queue of the unit of work, to be performed when Commit() is called. Upon Commit() the unit of work repository, i.e. this abstract Repository object, will persist the changes using the in-memory object context.
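From a client's perspective – assuming a wired-up concrete repository and unit of work, which we'll only build in the coming posts – the division of labour looks roughly like this:

```csharp
// nothing touches the data store yet; the change is only registered
Customer customer = customerRepository.FindBy(1);
customer.Name = "RenamedCustomer";
customerRepository.Update(customer);

// the unit of work now calls PersistUpdate back on the repository,
// which converts the domain object and updates the object context
unitOfWork.Commit();
```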

This model is relatively straightforward to change:

  • If you need a plain file-based storage mechanism then you might create a FileObjectContext class that writes to and reads from a file.
  • For EntityFramework and Linq to SQL you'll use the built-in object context classes to keep track of and persist the changes.
  • NoSql stores generally also have drivers for .NET, so they can back a NoSql implementation.

So the possibilities are in fact endless. You can hide the implementation behind abstractions such as the IUnitOfWork interface or the Repository abstract class. It may well be that you need to add methods to the Repository class depending on the data storage technology you use, but that's perfectly acceptable. The concrete repository layer is, well, concrete. You can dedicate it to a specific technology, just like the one we're building here is dedicated to an in-memory storage mechanism. You can have several different implementations of the IUnitOfWork and IUnitOfWorkRepository interfaces and test them before you let your application go public. You can even mix and match the implementations:

  • Register the changes in a temporary file and commit them in NoSql
  • Register the changes in memory and commit them to the cache
  • Register the changes in cache and commit them to SQL Azure

Of course these combinations are quite exotic, but they show the flexibility behind all these abstractions. Don't assume that the Repository class is a solution for ALL types of concrete unit of work. You'll certainly have to modify it depending on the concrete data storage mechanism you work with. However, note the following points:

  • The rest of the application will not be concerned with the concrete data access implementation
  • The implementation details are well hidden behind this layer without them bubbling up to and permeating the other layers

So as long as the concrete implementations are hidden in the data access layer you’ll be fine.

The last abstract method we'll add to Repository.cs is the ubiquitous find-by-ID method, which we'll delegate to the implementing classes:

public abstract DomainType FindBy(IdType id);

We’ll implement the CustomerRepository class in the next post.

View the list of posts on Architecture and Patterns here.

A model .NET web service based on Domain Driven Design Part 4: the abstract Repository

Introduction

In the previous post we created a very basic domain layer with the first domain object: Customer. We’ll now see what the data access layer could look like in an abstract form. This is where we must be careful not to commit the same mistakes as in the technology-driven example of the introductory post of this series.

We must abstract away the implementation details of the data access technology we use so that we can easily switch strategies later if necessary. We cannot let any technology-specific implementation bubble up from the data access layer. These details include the object context in EntityFramework, MongoClient and MongoServer in MongoDb .NET, the objects related to the file system in a purely file based data access solution etc., you probably get the idea. We must therefore make sure that no other layer will be dependent on the concrete data access solution.

We'll first lay the foundations for abstracting away any type of data access technology, and you may find it a "heavy" process with a steep learning curve. As these technologies come in many different shapes this is not the most trivial task to achieve. We can have ORM technologies like EntityFramework, document-based data storage like MongoDb, key-value style storage such as Azure storage, and it's not easy to find a common abstraction that fits all of them. We'll need to accommodate these technologies without violating the interface segregation principle (ISP) too much. We'll follow a couple of well-established patterns and see how they can be implemented.

Aggregate root

We discussed aggregates and aggregate roots in this post. Recall that aggregates are handled as one unit where the aggregate root is the entry point, the “boss” of the aggregate. This implies that the data access layer should only handle aggregate roots. It should not accept objects that lie somewhere within the aggregate. We haven’t yet implemented any code regarding aggregate roots, but we can take a very simple approach. Insert the following empty interface in the Infrastructure.Common/Domain folder:

public interface IAggregateRoot
{
}

Conceptually it would probably be better to create a base class for aggregate roots, but we already have one for entities. As a class cannot derive from two base classes, we'll indicate aggregate roots with this interface instead. At present it is simply a marker interface with no methods, much like the INamingContainer interface in ASP.NET. We could add common properties or methods here but I cannot think of any right now.

After consulting the domain expert we decide that the Customer domain is an aggregate root: the root of its own Customer aggregate. Right now the Customer aggregate has only one member, the Customer entity, but that's perfectly acceptable. An aggregate doesn't need to consist of at least two objects. Let's change the Customer domain object declaration as follows:

public class Customer : EntityBase<int>, IAggregateRoot

The rest of the post will look at how to abstract away the data access technology and some patterns related to that.

Unit of work

The first important concept within data access and persistence is the Unit of Work. The purpose of a Unit of Work is to maintain a list of objects that have been modified by a transaction. These modifications include insertions, updates and deletions. The Unit of Work co-ordinates the persistence of these changes and also checks for concurrency problems, such as the same object being modified by different threads.

This may sound a bit cryptic but if you’re familiar with EntityFramework or Linq to SQL then the object context in each technology is a good example for an implementation of a unit of work:

// EntityFramework
DbContext.AddObject("Customers", new Customer());
DbContext.SaveChanges();

// Linq to SQL
DbContext.Customers.InsertOnSubmit(new Customer());
DbContext.SubmitChanges();

In both cases the DbContext, which takes the role of the Unit of Work, first registers the changes. The changes are not persisted until the SaveChanges() – EntityFramework – or the SubmitChanges() – Linq to SQL – method is called. Note that the DB object context is responsible for registering AND persisting the changes.

The Unit of Work pattern can be represented by the following interface:

public interface IUnitOfWork
{
	void RegisterUpdate(IAggregateRoot aggregateRoot);
	void RegisterInsertion(IAggregateRoot aggregateRoot);
	void RegisterDeletion(IAggregateRoot aggregateRoot);
	void Commit();
}

Notice that we have separated the registration and commit actions. In the case of EntityFramework and similar ORM technologies the object that registers and persists the changes will often be the same: the object context or a similar object.

However, this is not always the case. It is perfectly reasonable that you are not fond of these automation tools and want to use something more basic where you have the freedom of specifying how you track changes and how you persist them: you may register the changes in memory and persist them in a file. Or you may still have a lot of legacy ADO.NET code and want to move to a more modern layered architecture. ADO.NET lacks a DbContext object, so you may have to solve the registration of changes in a different way.

For those scenarios we need to introduce another abstraction. Insert the following interface in a new folder called UnitOfWork in the Infrastructure layer:

public interface IUnitOfWorkRepository
{
	void PersistInsertion(IAggregateRoot aggregateRoot);
	void PersistUpdate(IAggregateRoot aggregateRoot);
	void PersistDeletion(IAggregateRoot aggregateRoot);
}

Insert a new interface called IUnitOfWork in the same folder:

public interface IUnitOfWork
{
	void RegisterUpdate(IAggregateRoot aggregateRoot, IUnitOfWorkRepository repository);
	void RegisterInsertion(IAggregateRoot aggregateRoot, IUnitOfWorkRepository repository);
	void RegisterDeletion(IAggregateRoot aggregateRoot, IUnitOfWorkRepository repository);
	void Commit();
}

The Unit of Work abstraction above extends the earlier version to allow a complete separation between the registration and the persistence of aggregate roots. Later on, as we see these elements at work and as you get more acquainted with the structure of the solution, you may decide that this is overkill and go with the simpler version; it's up to you.
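To make the callback mechanism concrete, here is a minimal in-memory sketch of the extended IUnitOfWork. This is not necessarily the implementation we'll build later, just an illustration of the registration/commit split:

```csharp
public class InMemoryUnitOfWork : IUnitOfWork
{
	// each registered change pairs an aggregate root with the
	// persistence action that should run on it at Commit() time
	private readonly Dictionary<IAggregateRoot, Action<IAggregateRoot>> _pendingChanges
		= new Dictionary<IAggregateRoot, Action<IAggregateRoot>>();

	public void RegisterInsertion(IAggregateRoot aggregateRoot, IUnitOfWorkRepository repository)
	{
		_pendingChanges[aggregateRoot] = repository.PersistInsertion;
	}

	public void RegisterUpdate(IAggregateRoot aggregateRoot, IUnitOfWorkRepository repository)
	{
		_pendingChanges[aggregateRoot] = repository.PersistUpdate;
	}

	public void RegisterDeletion(IAggregateRoot aggregateRoot, IUnitOfWorkRepository repository)
	{
		_pendingChanges[aggregateRoot] = repository.PersistDeletion;
	}

	public void Commit()
	{
		foreach (KeyValuePair<IAggregateRoot, Action<IAggregateRoot>> change in _pendingChanges)
		{
			change.Value(change.Key); // invoke PersistXXX on the owning repository
		}
		_pendingChanges.Clear();
	}
}
```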

Repository pattern

The repository pattern is used to declare the possible data access related actions you want to expose for your aggregate roots. The most basic of those are CRUD operations: insert, update, delete, select. You can have other methods such as insert many, select by ID, select by id and date etc., but the CRUD operations are probably the most common across all aggregate roots.

It may be beneficial to divide those operations into two groups: read and write operations. You may have a read-only policy for some aggregate roots so you don’t need to expose all of those operations. For read-only aggregates you can have a read-only repository. Insert the following interface in the Domain folder of the Infrastructure project:

public interface IReadOnlyRepository<AggregateType, IdType> where AggregateType : IAggregateRoot
{
	AggregateType FindBy(IdType id);
	IEnumerable<AggregateType> FindAll();
}

Here again we only allow aggregate roots to be selected. Finding an object by its ID is such a basic operation that it just has to be included here. FindAll is a more sensitive issue. If you have a data table with millions of rows then you may not want to expose this method, as it can bog down your data server, so use it with care. It's perfectly OK to omit this operation from this interface. You will be able to include such methods in domain-specific repositories, as we'll see in the next post. E.g. if you want to enable the retrieval of all Customers from the database then you can expose this method in the CustomerRepository class.
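A domain-specific repository interface could then extend the generic read-only one with its own queries. The sketch below is purely hypothetical – FindByName and FindByCity are made-up examples, and the real CustomerRepository comes in the next post:

```csharp
public interface ICustomerRepository : IReadOnlyRepository<Customer, int>
{
	// hypothetical domain-specific lookups beyond the common operations
	Customer FindByName(string customerName);
	IEnumerable<Customer> FindByCity(string city);
}
```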

The other data access operations can be exposed in an “actionable” interface. Insert the following interface in the same Domain folder of the Infrastructure project:

public interface IRepository<AggregateType, IdType>
	: IReadOnlyRepository<AggregateType, IdType>
	where AggregateType : IAggregateRoot
{
	void Update(AggregateType aggregate);
	void Insert(AggregateType aggregate);
	void Delete(AggregateType aggregate);
}

Keep in mind that these interfaces only expose the most common actions that are applicable across all aggregate roots. You’ll be able to specify domain-specific actions in domain-specific implementations of these repositories.

We’ll look at how these elements can be implemented for the Customer object in the next post.

View the list of posts on Architecture and Patterns here.

A model .NET web service based on Domain Driven Design Part 3: the Domain

Introduction

In the previous post we laid the theoretical foundation for our domains. Now it’s finally time to see some code. In this post we’ll concentrate on the Domain layer and we’ll also start building the Infrastructure layer.

Infrastructure?

Often the word infrastructure is taken to mean the storage mechanism, like an SQL database, or the physical layers of a system, such as the servers. However, in this case we mean something different. The infrastructure layer is a place for all sorts of cross-cutting concerns and objects that can be used in any project. They are not specific to any single project or domain. Examples: logging, file operations, security, caching, helper classes for Date and String operations etc. Putting these in a separate infrastructure layer helps if you want to employ the same logging, caching etc. policy across all your projects. You could put them within the project solution, but then when you start your next project you may need to copy and paste a lot of code.

Infrastructure

The Entities we discussed in the previous post will all derive from an abstract EntityBase class. We could put it directly into the domain layer, but think of this class instead as the base for all entities across all your DDD projects: the place for functionality common to all your domains.

Create a new blank solution in VS and call it DDDSkeletonNET.Portal. Add a new C# class library called DDDSkeletonNET.Infrastructure.Common. Remove Class1 and add a new folder called Domain. In that folder add a new class called EntityBase:

public abstract class EntityBase<IdType>
{
	public IdType Id { get; set; }

	public override bool Equals(object entity)
	{
		return entity != null
		   && entity is EntityBase<IdType>
		   && this == (EntityBase<IdType>)entity;
	}

	public override int GetHashCode()
	{
		return this.Id.GetHashCode();
	}

	public static bool operator ==(EntityBase<IdType> entity1, EntityBase<IdType> entity2)
	{
		if ((object)entity1 == null && (object)entity2 == null)
		{
			return true;
		}

		if ((object)entity1 == null || (object)entity2 == null)
		{
			return false;
		}

		if (entity1.Id.ToString() == entity2.Id.ToString())
		{
			return true;
		}

		return false;
	}

	public static bool operator !=(EntityBase<IdType> entity1, EntityBase<IdType> entity2)
	{
		return (!(entity1 == entity2));
	}
}

We only have one property at this point, the ID, whose type can be specified through the IdType type parameter. Often this will be an integer, a GUID or even some auto-generated string. The rest of the code takes care of comparison so that you can compare two entities with the ‘==’ operator or the Equals method. Recall that entityA == entityB if and only if their IDs are identical, hence the comparison is based on the ID property.
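The ID-based equality can be demonstrated with a trivial EntityBase subclass. Book and its Title are made-up for the example:

```csharp
public class Book : EntityBase<int>
{
	public string Title { get; set; }
}

// two references with the same ID represent the same entity,
// regardless of what their other properties say
Book copy1 = new Book() { Id = 42, Title = "Original title" };
Book copy2 = new Book() { Id = 42, Title = "Corrected title" };
Console.WriteLine(copy1 == copy2);                  // True
Console.WriteLine(copy1 == new Book() { Id = 43 }); // False
```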

Domains need to validate themselves when insertions or updates are executed so let’s add the following abstract method to EntityBase.cs:

protected abstract void Validate();

We can describe our business rules in many ways but the simplest format is a description in words. Add a class called BusinessRule to the Domain folder:

public class BusinessRule
{
	private string _ruleDescription;

	public BusinessRule(string ruleDescription)
	{
		_ruleDescription = ruleDescription;
	}

	public string RuleDescription
	{
		get
		{
			return _ruleDescription;
		}
	}
}

Coming back to EntityBase.cs we’ll store the list of broken business rules in a private variable:

private List<BusinessRule> _brokenRules = new List<BusinessRule>();

This list represents all the business rules that haven’t been adhered to during the object composition: the total price is incorrect, the customer name is empty, the person’s age is negative etc., so it’s all the things that make the state of the object invalid. We don’t want to save objects in an invalid state in the data storage, so we definitely need validation.

Implementing entities will be able to add to this list through the following method:

protected void AddBrokenRule(BusinessRule businessRule)
{
	_brokenRules.Add(businessRule);
}

External code will collect all broken rules by calling this method:

public IEnumerable<BusinessRule> GetBrokenRules()
{
	_brokenRules.Clear();
	Validate();
	return _brokenRules;
}

We first clear the list so that we don’t return any previously stored broken rules. They may have been fixed by then. We then run the Validate method which is implemented in the concrete domain classes. The domain will fill up the list of broken rules in that implementation. The list is then returned. We’ll see in a later post on the application service layer how this method can be used from the outside.

We’re now ready to implement the first domain object. Add a new C# class library called DDDSkeleton.Portal.Domain to the solution. Let’s make this easy for us and create the most basic Customer domain. Add a new folder called Customer and in it a class called Customer which will derive from EntityBase.cs. The Domain project will need to reference the Infrastructure project. Let’s say the Customer will have an id of type integer. At first the class will look as follows:

public class Customer : EntityBase<int>
{
	protected override void Validate()
	{
		throw new NotImplementedException();
	}
}

We know from the domain expert that every Customer will have a name, so we add the following property:

public string Name { get; set; }

Our first business rule says that the customer name cannot be null or empty. We’ll store these rule descriptions in a separate file within the Customer folder:

public static class CustomerBusinessRule
{
	public static readonly BusinessRule CustomerNameRequired = new BusinessRule("A customer must have a name.");
}

We can now implement the Validate() method in the Customer domain:

protected override void Validate()
{
	if (string.IsNullOrEmpty(Name))
	{
		AddBrokenRule(CustomerBusinessRule.CustomerNameRequired);
	}
}
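With Validate() in place we can exercise the broken rules mechanism from the outside. A quick sketch using the classes above (the Count() call needs a using directive for System.Linq):

```csharp
Customer customer = new Customer() { Id = 1, Name = "" };
foreach (BusinessRule rule in customer.GetBrokenRules())
{
	Console.WriteLine(rule.RuleDescription); // "A customer must have a name."
}

customer.Name = "ValidName";
Console.WriteLine(customer.GetBrokenRules().Count()); // 0
```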

Let’s now see how value objects can be used in code. The domain expert says that every customer will have an address property. We decide that we don’t need to track Addresses the same way as Customers, i.e. we don’t need to set an ID on them. We’ll need a base class for value objects in the Domain folder of the Infrastructure layer:

public abstract class ValueObjectBase
{
	private List<BusinessRule> _brokenRules = new List<BusinessRule>();

	public ValueObjectBase()
	{
	}

	protected abstract void Validate();

	public void ThrowExceptionIfInvalid()
	{
		_brokenRules.Clear();
		Validate();
		if (_brokenRules.Count() > 0)
		{
			StringBuilder issues = new StringBuilder();
			foreach (BusinessRule businessRule in _brokenRules)
			{
				issues.AppendLine(businessRule.RuleDescription);
			}

			throw new ValueObjectIsInvalidException(issues.ToString());
		}
	}

	protected void AddBrokenRule(BusinessRule businessRule)
	{
		_brokenRules.Add(businessRule);
	}
}

…where ValueObjectIsInvalidException looks as follows:

public class ValueObjectIsInvalidException : Exception
{
	public ValueObjectIsInvalidException(string message)
		: base(message)
	{
	}
}

You’ll recognise the Validate and AddBrokenRule methods. Value objects can of course also have business rules that need to be enforced. In the Domain layer add a new folder called ValueObjects. Add a class called Address in that folder:

public class Address : ValueObjectBase
{
	protected override void Validate()
	{
		throw new NotImplementedException();
	}
}

Add the following properties to the class:

public string AddressLine1 { get; set; }
public string AddressLine2 { get; set; }
public string City { get; set; }
public string PostalCode { get; set; }

The domain expert says that every Address object must have a valid City property. We can follow the same structure we took above. Add the following class to the ValueObjects folder:

public static class ValueObjectBusinessRule
{
	public static readonly BusinessRule CityInAddressRequired = new BusinessRule("An address must have a city.");
}

We can now complete the Validate method of the Address object:

protected override void Validate()
{
	if (string.IsNullOrEmpty(City))
	{
		AddBrokenRule(ValueObjectBusinessRule.CityInAddressRequired);
	}
}

Let’s add this new Address property to Customer:

public Address CustomerAddress { get; set; }

We’ll include the value object validation in the Customer validation:

protected override void Validate()
{
	if (string.IsNullOrEmpty(Name))
	{
		AddBrokenRule(CustomerBusinessRule.CustomerNameRequired);
	}
	CustomerAddress.ThrowExceptionIfInvalid();
}

As the Address value object is a property of the Customer entity it’s practical to include its validation within this Validate method. The Customer object doesn’t need to concern itself with the exact rules of an Address object. Customer effectively tells Address to go and validate itself.
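The effect can be seen in a short sketch: an Address without a City makes the whole Customer validation blow up with a ValueObjectIsInvalidException.

```csharp
Customer customer = new Customer()
{
	Id = 1,
	Name = "ValidName",
	CustomerAddress = new Address() { AddressLine1 = "Main street" } // City missing
};

try
{
	customer.GetBrokenRules();
}
catch (ValueObjectIsInvalidException ex)
{
	Console.WriteLine(ex.Message); // contains "An address must have a city."
}
```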

We’ll stop building our domain layer now. We could add several more domain objects but let’s keep things as simple as possible so that you will be able to view the whole system without having to track too many threads. We now have an example of an entity, a value object and a couple of basic validation rules. We have missed how aggregate roots play a role in code but we’ll come back to that in the next post. You probably want to see how the repository, service and UI layers can be wired together.

We’ll start building the repository layer in the next post.

View the list of posts on Architecture and Patterns here.

A model .NET web service based on Domain Driven Design Part 2: DDD basics

Introduction

In the previous post we discussed how a traditional data access based layering can cause grief and sorrow. We’ll now move on to a better design and we’ll start with – what else – the domain layer. It is the heart of software, as Eric Evans put it, which should be the centre of the application. It should reflect our business entities and rules as close as possible. It will be an ever-evolving layer where you add, modify and remove bits of code as you get to know the domain better and better. The initial model will grow to a deep model with experimenting, brainstorming and knowledge crunching.

Before we dive into any code we have to discuss a couple of important terms. The domain layer is so important that an entire vocabulary has evolved around it. I promise to get to some real C# code after the theory.

Entities and value objects

Domain objects can be divided into entities and value objects.

An entity is an object with a unique ID. This unique ID is the most important property of an entity: it helps distinguish between two otherwise identical objects. We can have two people with the same name but if their IDs are different then we’re talking about two different people. Also, a person can change his/her name, but if the ID is the same then we know we’re talking about the same person. An ID is typically some integer or a GUID or some other string value generated based on some properties of the object. Once the entity has been persisted, its ID should never change, otherwise we lose track of it. In Sweden every legal person and company receives a personal registration number. Once you are given this ID it cannot be changed. From the point of view of the Swedish authorities this ID is my most important “property”. EntityA == EntityB if and only if EntityA.ID == EntityB.ID. Entities should ideally remain very clean POCO objects without any trace of technology-specific code. You may be tempted to add technology-specific elements such as MVC attributes in here, e.g. [Required] or [StringLength(80)]. DON’T DO THAT. EVER! We’ll see later where they belong.

A value object on the other hand lacks any sort of ID. E.g. an office building may have 100 windows that look identical. Depending on your structure of the building you may not care which exact window is located where. Just grab whichever and mount it in the correct slot. In this case it’s futile to assign an ID to the window, you don’t care which one is which. Value objects are often used to group certain properties of an Entity. E.g. a Person entity can have an Address property which includes other properties such as Street and City. In your business an Address may well be a value object. In other cases, such as a postal company business, an Address will almost certainly be an Entity. A value object can absolutely have its own logic. Note that if you save value objects in the database it’s reasonable to assume that they will be assigned an automatic integer ID, but again – that’s an implementation detail that only concerns how our domain is represented in the storage, the domain layer doesn’t care, it will ignore the ID assigned by the data storage mechanism.

Sometimes it’s not easy to decide whether a specific object is an entity or a value object. If two objects have identical properties but we still need to distinguish between them then it’s most likely an entity with a unique ID. If two objects have identical properties and we don’t care which one is which: a value object. If the same object can be shared, e.g. the same Name can be attached to many Person objects: most likely a value object.

Aggregates and aggregate roots

Objects are seldom “independent”: they have other objects attached to them or are part of a larger object graph themselves. A Person object may have an Address object which in turn may have other Person objects if more than one person is registered there. It can be difficult to handle in software: if we delete a Person, then do we delete the Address as well? Then other Person objects registered there may be “homeless”. We can leave the Address in place, but then the DB may be full of empty, redundant addresses. Objects may therefore have a whole web of interdependency. Where does an object start and end? Where are its boundaries? There’s clearly a need to organise the domain objects into logical groups, so-called aggregates. An aggregate is a group of associated objects that are treated as a unit for the purpose of data changes. Every aggregate has a root and clear boundaries. The root is the only member of an aggregate that outside objects are allowed to hold references to.

Example

Take a Car object: it has a unique ID, e.g. the registration number to distinguish it from other cars. Each tire must also be measured somehow, but it’s unlikely that the tires will be treated separately from the car they belong to. It’s also unlikely that we make a DB search for a tire to see which car it belongs to. Therefore the Car is an aggregate root whose boundary includes the tires. Consumers of the Car object should consequently not be able to directly change the tire settings of the car. You may go as far as hiding the tires entirely within the Car aggregate and only let outside callers “get in touch with them” through methods such as “MoveCarForward”. The internal logic of the Car object may well modify the tires’ position, but to the outside world that is not visible.

An engine will probably also have a serial number and may be tracked independently of the car. It may happen that the engine is the root of its own aggregate and doesn’t lie within the boundaries of the Car, it depends on the domain model of the application.
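A minimal sketch of such a Car aggregate might look like this; the Tire class and the MoveCarForward method are illustrative assumptions, not part of the skeleton application:

```csharp
using System.Collections.Generic;

// The tire has state of its own but no public identity;
// it lives entirely within the Car aggregate boundary
public class Tire
{
	public long DistanceTravelled { get; internal set; }
}

public class Car
{
	// The registration number is the identity of the aggregate root
	public string RegistrationNumber { get; private set; }

	// Outside objects can never hold a reference to a tire
	private readonly List<Tire> _tires;

	public Car(string registrationNumber)
	{
		RegistrationNumber = registrationNumber;
		_tires = new List<Tire> { new Tire(), new Tire(), new Tire(), new Tire() };
	}

	// The only way the tires change is through an operation on the root;
	// the internal logic modifies them as a side effect
	public void MoveCarForward(long metres)
	{
		foreach (Tire tire in _tires)
		{
			tire.DistanceTravelled += metres;
		}
	}
}
```

Consumers can only call MoveCarForward on the root; the tires are updated internally and never exposed.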

Car aggregate root

An important term in conjunction with Aggregates is invariants. Invariants are consistency rules that must be maintained whenever data changes, so they represent business rules.

General guidelines for constructing the domain layer

Make sure that you allocate the best and most experienced developers available at your company to domain-related tasks. Often experienced programmers spend their time exploring and testing new technologies, such as WebSockets, while entry-level programmers are entrusted with constructing the domain layer: technologically it’s not too difficult – abstract and concrete classes, interfaces, methods, properties, that’s it really. Every developer can construct these elements after completing a beginners’ course in C#, so people tend to think that this is an entry-level task. However, domain modelling and writing code in a loosely coupled and testable manner are no trivial tasks to accomplish.

Domains are the most important objects in Domain Driven Design. They are the objects that describe our business. They are so central that you should do your utmost to keep them clean, well organised and testable. It is worth building a separate Domain layer during the mockup phase of a new project, even if you ignore the other layers. If you need to allocate time to maintain each layer within your solution then this layer should be allocated the most. This is important to keep in mind because you know how it usually goes with test projects – if management likes the mockup then you often need to continue building upon your initial design. There’s no time to build a better application because suddenly you are given a tight deadline. Then you’re stuck with a domain-less solution.

Domain objects should always be the ones that take priority. If you have to change the domain object to make room for some specific technology, then you’re taking the wrong direction. It is the technology that must fit the domain objects, not vice versa.

Domain objects are also void of persistence-related code making them technology agnostic or persistence ignorant. A Customer object will not care how it is stored in a database table, that is an irrelevant implementation detail.

Domain objects should be POCOs, void of technology-specific attributes such as the ones available in standard MVC, e.g. [Required]. Domain objects should be clear of all such ‘pollution’. Also, all logic specific to a domain must be included in the domain. These business rules include validation rules as well, such as “the quantity must not be negative” or “a customer must not buy more than 5 units of this product”. Resist the temptation to write external classes, such as CustomerHelper, where you put domain-specific logic – a domain is responsible for implementing its own logic. Example: in a banking application we may have an Account domain with a rule saying that the balance cannot be negative. You must include this validation logic within Account.cs, not in some other helper class. If the Account class has complex validation logic then it’s fine to factor out the IMPLEMENTATION of the validation, i.e. the method body, to a separate helper class. However, having a helper class that totally encapsulates domain validation and other domain-specific logic is the incorrect approach. Consumers of the domain will not be aware that there is some external class that they need to call. If domain-specific logic is placed elsewhere then not even a visual inspection of the class file will reveal the domain logic to the consumer.
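The Account example can be sketched as follows; the WithdrawalValidator helper and the method names are hypothetical, illustrating how only the implementation is factored out while the domain object itself remains the entry point for the rule:

```csharp
using System;

public class Account
{
	public decimal Balance { get; private set; }

	public void Deposit(decimal amount)
	{
		if (amount <= 0) throw new ArgumentException("The deposit must be positive.");
		Balance += amount;
	}

	// Consumers only ever see the domain object enforcing the rule...
	public void Withdraw(decimal amount)
	{
		if (!WithdrawalValidator.IsValidWithdrawal(Balance, amount))
		{
			throw new InvalidOperationException("The balance cannot go negative.");
		}
		Balance -= amount;
	}
}

// ...while only the method body lives in the helper; consumers of
// Account never need to know this class exists
internal static class WithdrawalValidator
{
	public static bool IsValidWithdrawal(decimal balance, decimal amount)
	{
		return amount > 0 && balance - amount >= 0;
	}
}
```

A visual inspection of Account.cs still reveals the rule to the consumer, which is the point of keeping the entry point in the domain object.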

Make sure to include logic specific to a single domain object as well. Example: say you have an object called Observation with two properties of type double: min and max. If you want to calculate the average then Observation.cs will have a method called CalculateAverage that returns the average of min and max, i.e. it performs logic on the level of a single domain object.
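A sketch of this single-object logic, following the Observation example above:

```csharp
public class Observation
{
	public double Min { get; private set; }
	public double Max { get; private set; }

	public Observation(double min, double max)
	{
		Min = min;
		Max = max;
	}

	// Operates solely on the object's own state, so it belongs
	// here and not in an external helper or service
	public double CalculateAverage()
	{
		return (Min + Max) / 2.0;
	}
}
```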

In the next post we’ll look at how these concepts can be written in code.

View the list of posts on Architecture and Patterns here.

A model .NET web service based on Domain Driven Design Part 1: introduction

Introduction

Layered architecture has been the norm in enterprise projects for quite some time now. We layer our solutions into groups of responsibilities: UI, services, data access, infrastructure and others. We can connect these layers by setting references to them from the consuming layer. There are various ways to set up these layers and references. The choices you make here depend on the type of project you’re working on – Web, WPF, WCF, games etc. – and the philosophy you follow.

On the one hand you may be very technology driven and base your choices primarily on the technologies available to build your project: Entity Framework, Ajax, MVC etc. Most programmers are like that and that’s natural. It’s a safe bet that you’ve become a developer because you’re interested in computers and computer-related technologies like video games and the Internet. It’s exciting to explore and test new technologies as they emerge: .NET, ORM, MVC, WebAPI, various JavaScript libraries, WebSockets, HighCharts, you name it. In fact, you find programming so enthralling that you would keep doing it for free, right? Just don’t tell your boss about it in the next salary review… With this approach it can happen in a layered architecture that the objects in your repository layer – i.e. the data access layer, backed by e.g. an SQL Server database – receive the top focus in your application. This is due to the popular ORM technologies available in .NET: EntityFramework and Linq to SQL. They generate classes based on some database representation of your domain objects. All other layers will then depend on the data access layer. We’ll see an example of this below.

On the other hand you may put a different part of the application to the forefront: the domain. What is the domain? The domain describes your business. It is a collection of all concepts and logic that drive your enterprise. If you follow this type of philosophy, which is the essence of Domain Driven Design (DDD), then you give the domain layer the top priority. It will be the most important ingredient of the application. All other layers will depend on the domain layer. The domain layer will be an entirely independent one that can function on its own. The most important questions won’t be “How do we solve this with WebSockets?” or “How does it work with AngularJs?” but rather like these ones:

  • How does the Customer fit into our domain?
  • Have we modelled the Product domain correctly?
  • Is this logic part of the Product or the OrderItem domain?
  • Where do we put the pricing logic?

Technology-related questions will suddenly become implementation details where the exact implementation type can vary. We don’t really care which graph package we use: it’s an implementation detail. We don’t really care which ORM technology we use: it’s an implementation detail.

Of course technology-related concerns will be important aspects of the planning phase, but not as important as how to correctly represent our business in code.

In a technology-driven approach it can easily happen that the domain is given low priority and the technology choices will directly affect the domain: “We have to change the Customer domain because its structure doesn’t fit Linq to SQL”. In DDD the direction is reversed: “An important part of our Customer domain structure has changed so we need to update our Linq to SQL classes accordingly.” A change in the technology should never force a change in your domain. Your domain is an independent entity that only changes if your business rules change or you discover that your domain logic doesn’t represent the true state of things.

Keep in mind that it is not the number of layers that makes your solution “modern”. You can easily layer your application in a way that it becomes a tightly coupled nightmare to work with and – almost – impossible to extend. If you do it correctly then you will get an application that is fun and easy to work with and is open to all sorts of extensions, even unanticipated ones. This last aspect is important to keep in mind. Nowadays customers’ expectations and wishlists change very often. You must have noticed that the days of waterfall projects, where the first version of the application may have rolled out months or even years after the initial specs were written, are over. Today we have a product release almost every week and we work closely with the customer on every increment we build. Based on these product increments the customer can come up with new demands and you had better be prepared to react fast instead of having to rewrite large chunks of your code.

Goals of this series

I’ll start with stating what’s definitely not the goal: giving a detailed account of all aspects of DDD. You can read all about DDD by its inventor Eric Evans in this book. It’s a book that discusses all areas of DDD you can think of. It’s no use regurgitating all of that in these posts. Also, building a model application which uses every concept from DDD could easily grow into a full-fledged enterprise application, such as SAP.

Instead, I’ll try to build the skeleton of a .NET solution that takes the most important ideas from DDD. The result will hopefully be a solution that you can learn from, tweak and use in your own project. Don’t feel constrained by this specific implementation – feel free to change it in a way that fits your philosophy.

However, if you’re completely new to DDD then you should still benefit from these posts as I’ll provide explanations for those key concepts. I’ll try to avoid giving you too much theory and rather concentrate on code but some background must be given in order to explain the key ideas. We’ll need to look at some key terms before providing any code. I’ll also refer to ideas from SOLID here and there – if you don’t understand this term then start here.

Also, it would be great to build the example application with TDD, but that would add a lot of noise to the main discussion. If you don’t know what TDD means, start here.

The ultimate goal is to end up with a skeleton solution with the following layers:

  • Infrastructure: infrastructure services to accommodate cross-cutting concerns
  • Data access (repository): data access and persistence technology layer, such as EntityFramework
  • Domain: the domain layer with our business entities and logic, the centre of the application
  • Application services: thin layer to provide actions for the consumer(s) such as a web or desktop app
  • Web: the ultimate consumer of the application whose only purpose is to show data and accept requests from the user

…where the layers communicate with each other in a loosely coupled manner through abstractions. It should be easy to replace an implementation with another. Of course writing a new version of – say – the repository layer is not a trivial task and layering won’t make it easier. However, the switch to the new implementation should go with as little pain as possible without the need to modify any of the other layers.

The Web layer implementation will actually be a web service layer using the Web API technology so that we don’t need to waste time on CSS and HTML. If you don’t know what Web API is about, make sure you understand the basics from this post.

UPDATE: the skeleton application is available for download on GitHub here.

Doing it wrong

Let’s take an application that is layered using the technology-driven approach. Imagine that you’ve been tasked with creating a web application that lists the customers of the company you work for. You know that layered architectures are the norm so you decide to build these 3 layers:

  • User Interface
  • Domain logic
  • Data access

You know more or less what the Customer domain looks like so you create the following table in SQL Server:

Customer table in SQL Server

Then you want to use the ORM capabilities of Visual Studio to create the Customer class for you using the Entity Framework Data Model Wizard. Alternatively you could go for a Linq to SQL project type, it doesn’t make any difference to our discussion. At this point you’re done with the first layer in your project, the data access one:

Entity Framework model context

EntityFramework has created a DbContext and the Customer class for us.

Next, you know that there’s logic attached to the Customer domain. In this example we want to refine the logic of what price reduction a customer receives when purchasing a product. Therefore you implement the Domain Logic Layer, a C# class library. To handle Customer-related queries you create a CustomerService class:

public class CustomerService
{
	private readonly ModelContext _objectContext;

	public CustomerService()
	{
		_objectContext = new ModelContext();
	}

	public IEnumerable<Customer> GetCustomers(decimal totalAmountSpent)
	{
		decimal priceDiscount = totalAmountSpent > 1000M ? .9m : 1m;
		List<Customer> allCustomers = (from c in _objectContext.Customers select c).ToList();
		return from c in allCustomers select new Customer()
		{
			Address = c.Address
			, DiscountGiven = c.DiscountGiven * priceDiscount
			, Id = c.Id
			, IsPreferred = c.IsPreferred
			, Name = c.Name
		};
	}
}

As you can see the final price discount depends also on the amount of money spent by the customer, not just the value currently stored in the database. So, that’s some extra logic that is now embedded in this CustomerService class. If you’re familiar with SOLID then the ‘_objectContext = new ModelContext();’ part immediately raises a warning flag for you. If not, then make sure to read about the Dependency Inversion Principle.

In order to make this work I had to add a library reference from the domain logic to the data access layer. I also had to add a reference to the EntityFramework library due to the following exception:

The type ‘System.Data.Entity.DbContext’ is defined in an assembly that is not referenced. You must add a reference to assembly ‘EntityFramework, Version=4.4.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089’.

So I import the EntityFramework library into the DomainLogic layer to make the project compile.

At this point I have the following projects in the solution:

Data access and domain logic layer

The last layer to add is the UI layer, we’ll go with an MVC solution. The Index() method which lists all customers might look like the following:

public class HomeController : Controller
{
	public ActionResult Index()
	{
		decimal totalAmountSpent = 1000; //to be fetched with a different service, ignored here
		CustomerService customerService = new CustomerService();
		IEnumerable<Customer> customers = customerService.GetCustomers(totalAmountSpent);
		return View(customers);
	}	
}

In order to make this work I had to reference both the DomainLogic AND the DataAccess projects from the UI layer. I now have 3 layers in the solution:

DDD three layers

So I press F5 and… …get an exception:

“No connection string named ‘ModelContext’ could be found in the application config file.”

This exception is thrown at the following line within CustomerService.GetCustomers:

List<Customer> allCustomers = (from c in _objectContext.Customers select c).ToList();

The ModelContext object implicitly expects that a connection string called ModelContext is available in the web.config file. It was originally inserted into app.config of the DataAccess layer by the EntityFramework code generation tool. So a quick fix is to copy the connection string from this app.config to web.config of the web UI layer:

<connectionStrings> 
    <add name="ModelContext" connectionString="metadata=res://*/Model.csdl|res://*/Model.ssdl|res://*/Model.msl;provider=System.Data.SqlClient;provider connection string=&quot;data source=mycomputer;initial catalog=Test;integrated security=True;MultipleActiveResultSets=True;App=EntityFramework&quot;" providerName="System.Data.EntityClient" />
</connectionStrings>

Evaluation

Have we managed to build a “modern” layered application? Well, we certainly have layers, but they are tightly coupled. Both the UI and the Logic layer are strongly dependent on the DataAccess layer. Examples of tight coupling:

  • The HomeController/Index method constructs a CustomerService object
  • The HomeController/Index directly uses the Customer object from the DataAccess layer
  • The CustomerService constructs a new ModelContext object
  • The UI layer is forced to include an EntityFramework-specific connection string in its config file

At present we have the following dependency graph:

DDD wrong dependency graph

It’s immediately obvious that the UI layer can easily bypass any logic and consult the data access layer directly. It can read all customers from the database without applying the extra logic to determine the price discount.

Is it possible to replace the MVC layer with another UI type, say WPF? It certainly is, but it does not change the dependency graph at all. The WPF layer would also need to reference both the Logic and the DataAccess layers.

Is it possible to replace the data access layer? Say you’re done with object-relational databases and want to switch to file-based NoSql solutions such as MongoDb. Or you might want to move to the cloud and go with an Azure key-value type of storage. We don’t even need to go as far as entirely changing the data storage mechanism. What if you only want to upgrade from Linq to SQL to EntityFramework? It can be done, but it will be a difficult process. Well, maybe not in this small example application, but imagine a real-world enterprise app with hundreds of domains where the layers are coupled to such a degree with all those references that removing the data access layer would cause the application to break down immediately.

The ultimate problem is that the entire domain model is defined in the data access layer. We’ve let the technology – EntityFramework – take over, and it generated our domain objects based on some database representation of our business. It is an entirely acceptable solution to let an ORM technology help us program against a data storage mechanism in a strongly-typed, object-oriented fashion. Who wants to work with DataReaders and SQL commands in plain string format nowadays? However, letting some data access automation tool take over our core business and permeate the rest of the application is far from desirable. The CustomerService constructor tightly couples the DomainLogic layer to EntityFramework. Since we need to reference the data access layer from within our UI, even the MVC layer is coupled to EntityFramework. If we need to change the storage mechanism or the domain logic layer, we potentially face a considerable amount of painful rework, such as manually rewriting Linq to SQL object context references into EntityFramework ones.
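To hint at the direction the rest of the series will take, here is a hedged sketch of how an abstraction could invert that dependency. The ICustomerRepository interface and the simplified Customer class are hypothetical at this point – the real solution is built step by step in later posts:

```csharp
using System.Collections.Generic;
using System.Linq;

// Simplified stand-in for the domain object
public class Customer
{
	public int Id { get; set; }
	public string Name { get; set; }
	public decimal DiscountGiven { get; set; }
}

// The domain logic only depends on this abstraction; whether it is
// implemented with EntityFramework, Linq to SQL or an in-memory stub
// becomes an implementation detail
public interface ICustomerRepository
{
	IEnumerable<Customer> GetAllCustomers();
}

public class CustomerService
{
	private readonly ICustomerRepository _customerRepository;

	// The repository is injected from the outside instead of being
	// constructed inside the service
	public CustomerService(ICustomerRepository customerRepository)
	{
		_customerRepository = customerRepository;
	}

	public IEnumerable<Customer> GetCustomers(decimal totalAmountSpent)
	{
		decimal priceDiscount = totalAmountSpent > 1000M ? .9m : 1m;
		return from c in _customerRepository.GetAllCustomers()
		       select new Customer
		       {
			       Id = c.Id,
			       Name = c.Name,
			       DiscountGiven = c.DiscountGiven * priceDiscount
		       };
	}
}
```

With this shape neither the UI nor the domain logic needs a reference to EntityFramework, and the discount logic can no longer be bypassed by going straight to the ORM classes.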

We have failed miserably in building a loosely coupled, composable application whose layers each have a single responsibility. The domain layer, as stated in the introduction, should be the single most important layer in your application, independent of all other layers – possibly with the exception of infrastructure services, which we’ll look at in a later post in this series. Instead, it is now the technological data access layer that rules, and the domain logic layer is relegated to a second-class citizen that can be circumvented entirely.

The domain is at the heart of our business. It exists regardless of any technological details. If we sell cars, then we sell cars whether the car objects are stored in an XML file or in the cloud. How our business is represented in the data storage is a technological detail, something only the Dev department needs to be concerned with. The business rules and domains have a much higher priority than that: they are known to all departments of the company.

Another project type where you can easily confuse the roles of each layer is ASP.NET WebForms with its code-behind. It’s very easy to put a lot of logic, database calls, validation etc. into the code-behind file. You’ll soon end up with the Smart UI anti-pattern where the UI is responsible for all business aspects of your application. Such a web application is difficult to compose, unit test and decouple. This doesn’t mean that you must abandon WebForms entirely, but use it wisely. Look into the Model-View-Presenter pattern, which is sort of MVC for WebForms, and then you’ll be fine.

So how can we better structure our layers? That will be the main topic of the series. However, we’ll need to lay the foundation for our understanding with some theory and terms which we’ll do in the next post.

View the list of posts on Architecture and Patterns here.

SOLID design principles in .NET: the Dependency Inversion Principle Part 5, Hello World revisited

Introduction

I realise that the previous post should have been the last one on the Dependency Inversion Principle but I decided to add one more, albeit a short one. It can be beneficial to look at one more example where we take a very easy starting point and expand it according to the guidelines we’ve looked at in this series.

The main source in the series on Dependency Injection is based on the work of Mark Seemann and his great book on DI.

Demo

The starting point of the exercise is the good old one-liner Hello World programme:

static void Main(string[] args)
{
	Console.WriteLine("Hello world");
}

Now that we’re fluent in DIP and SOLID we can immediately see a couple of flaws with this solution:

  • We can only write to the Console – if we want to write to a file then we’ll have to modify Main
  • We can only print Hello world to the console – we have to manually overwrite this bit of code if we want to print something else
  • We cannot easily extend this application in a sense that it lacks any seams that we discussed before – what if we want to add logging or security checks?

Let’s try to rectify these shortcomings. We’ll tackle the problem of message printing first. The Adapter pattern solves the issue by abstracting away the Print operation in an interface:

public interface ITextWriter
{
	void WriteText(string text);
}

We can then implement the Console-based solution as follows:

public class ConsoleTextWriter : ITextWriter
{
	public void WriteText(string text)
	{
		Console.WriteLine(text);
	}
}

Next let’s find a solution for collecting what the text writer needs to output. We’ll take the same approach and follow the adapter pattern:

public interface IMessageCollector
{
	string CollectMessageFromUser();
}

…with the corresponding Console-based implementation looking like this:

public class ConsoleMessageCollector : IMessageCollector
{
	public string CollectMessageFromUser()
	{
		Console.Write("Type your message to the world: ");
		return Console.ReadLine();
	}
}

These loose dependencies must be injected into another object, let’s call it PublicMessage:

public class PublicMessage
{
	private readonly IMessageCollector _messageCollector;
	private readonly ITextWriter _textWriter;

	public PublicMessage(IMessageCollector messageCollector, ITextWriter textWriter)
	{
		if (messageCollector == null) throw new ArgumentNullException("messageCollector");
		if (textWriter == null) throw new ArgumentNullException("textWriter");
		_messageCollector = messageCollector;
		_textWriter = textWriter;
	}

	public void Shout()
	{
		string message = _messageCollector.CollectMessageFromUser();
		_textWriter.WriteText(message);
	}
}

You’ll recognise some of the most basic techniques we’ve looked at in this series: constructor injection, guard clauses, readonly private backing fields.

We can use these objects from Main as follows:

static void Main(string[] args)
{
	IMessageCollector messageCollector = new ConsoleMessageCollector();
	ITextWriter textWriter = new ConsoleTextWriter();
	PublicMessage publicMessage = new PublicMessage(messageCollector, textWriter);
	publicMessage.Shout();

	Console.ReadKey();
}

Now we’re free to inject any implementation of those interfaces: read from a database and print to a file; read from a file and print to an email; read from the console and print to some web service. The PublicMessage class won’t care; it’s oblivious to the concrete implementations.
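For instance, a file-based writer – a hypothetical implementation, not part of the original example; the ITextWriter interface is repeated here for completeness – slots into the same seam without touching PublicMessage:

```csharp
using System;
using System.IO;

public interface ITextWriter
{
	void WriteText(string text);
}

// Writes messages to a file instead of the console
public class FileTextWriter : ITextWriter
{
	private readonly string _filePath;

	public FileTextWriter(string filePath)
	{
		if (filePath == null) throw new ArgumentNullException("filePath");
		_filePath = filePath;
	}

	public void WriteText(string text)
	{
		// Append so that consecutive messages accumulate in the file
		File.AppendAllText(_filePath, text + Environment.NewLine);
	}
}
```

In Main you would only change the single line that constructs the ITextWriter implementation.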

This solution is a lot more extensible. We can use the decorator pattern to add functionality to the text writer. Let’s say we want to add logging to the text writer through the following interface:

public interface ILogger
{
	void Log();
}

We can have some default implementation:

public class DefaultLogger : ILogger
{
	public void Log()
	{
		//implementation ignored
	}
}

We can wrap the text printing functionality within logging as follows:

public class LogWriter : ITextWriter
{
	private readonly ILogger _logger;
	private readonly ITextWriter _textWriter;

	public LogWriter(ILogger logger, ITextWriter textWriter)
	{
		if (logger == null) throw new ArgumentNullException("logger");
		if (textWriter == null) throw new ArgumentNullException("textWriter");
		_logger = logger;
		_textWriter = textWriter;
	}

	public void WriteText(string text)
	{
		_logger.Log();
		_textWriter.WriteText(text);
	}
}

In Main you can have the following:

static void Main(string[] args)
{
	IMessageCollector messageCollector = new ConsoleMessageCollector();
	ITextWriter textWriter = new LogWriter(new DefaultLogger(), new ConsoleTextWriter());
	PublicMessage publicMessage = new PublicMessage(messageCollector, textWriter);
	publicMessage.Shout();

	Console.ReadKey();
}

Notice that we didn’t have to do anything to PublicMessage. We passed in the interface dependencies as before and now we have the logging function included in message writing. Also, note that Main is tightly coupled to a range of objects, but it is acceptable in this case. We construct our objects in the entry point of the application, i.e. the composition root which is the correct place to do that. We don’t new up any dependencies within PublicMessage.

This was of course a very contrived example. We expanded the original one-liner into a much more complex solution with much higher overhead. However, real-life applications, especially enterprise ones, are infinitely more complicated and requirements change a lot. Customers are usually not sure what they want and wish to include new and updated features in the middle of the project. It’s vital for you as a programmer to be able to react quickly. Enabling loose coupling like this will make your life easier by sparing you from changing several seemingly unrelated parts of your code.

View the list of posts on Architecture and Patterns here.

SOLID design principles in .NET: the Dependency Inversion Principle Part 4, Interception and conclusions

Introduction

I briefly mentioned the concept of Interception in this post. It is a technique that can help you implement cross-cutting concerns such as logging, tracing, caching and other similar activities. Cross-cutting concerns include actions that are not strictly related to a specific domain but can potentially be called from many different objects. E.g. you may want to cache certain method results pretty much anywhere in your application so potentially you’ll need an ICacheService dependency in many places. In the post mentioned above I went through a possible DI pattern – ambient context – to implement such actions with all its pitfalls.

If you’re completely new to these concepts make sure you read through all the previous posts on DI in this series. I won’t repeat what was already explained before.

The idea behind Interception is quite simple. When a consumer calls a service you may wish to intercept that call and execute some action before and/or after the actual service is invoked.

It happens occasionally that I do the shopping on my way home from work. This is a real life example of interception: the true purpose of my action is to get home but I “intercept” that action with another one, namely buying some food. I can also do the shopping when I pick up my daughter from the kindergarten or when I want to go for a walk. So I intercept the main actions PickUpFromKindergarten() and GoForAWalk() with the shopping action because it is convenient to do so. The Shopping action can be injected into several other actions so in this case it may be considered as a Cross-Cutting Concern. Of course the shopping activity can be performed in itself as the main action, just like you can call a CacheService directly to cache something, in which case it too can be considered as the main action.

The main source in the series on Dependency Injection is based on the work of Mark Seemann and his great book on DI.

The problem

Say you have a service that looks up an object with an ID:

public interface IProductService
{
	Product GetProduct(int productId);
}
public class DefaultProductService : IProductService
{
	public Product GetProduct(int productId)
	{
		return new Product();
	}
}

Say you don’t want to look up this product every time so you decide to cache the result for 10 minutes.

Possible solutions

Total lack of DI

The first “solution” is to directly implement caching within the GetProduct method. Here I’m using the ObjectCache object located in the System.Runtime.Caching namespace:

public Product GetProduct(int productId)
{
	ObjectCache cache = MemoryCache.Default;
	string key = "product|" + productId;
	Product p = null;
	if (cache.Contains(key))
	{
		p = (Product)cache[key];
	}
	else
	{
		p = new Product();
		CacheItemPolicy policy = new CacheItemPolicy();
		DateTimeOffset dof = DateTimeOffset.Now.AddMinutes(10);
		policy.AbsoluteExpiration = dof;
		cache.Add(key, p, policy);
	}
	return p;
}

We check the cache using the cache key and retrieve the Product object if it’s available. Otherwise we simulate a database lookup and put the Product object in the cache with an absolute expiration of 10 minutes.

If you’ve read through the posts on DI and SOLID then you should know that this type of code has numerous pitfalls:

  • It is tightly coupled to the ObjectCache class
  • You cannot easily specify a different caching strategy – if you want to increase the caching time to 20 minutes then you’ll have to come back here and modify the method
  • The method signature does not tell anything to the caller about caching, so it violates the idea of an Intention Revealing Interface mentioned before
  • Therefore the caller will need to intimately know the internals of the GetProduct method
  • The method is difficult to test as it’s impossible to abstract away the caching logic. The test result will depend on the caching mechanism within the code so it will be inconclusive

Nevertheless you have probably encountered this style of coding quite often. There is nothing stopping you from writing code like that. It’s quick, it’s dirty, but it certainly works.

As an attempt to remedy the situation you can factor out the caching logic to a service:

public class SystemRuntimeCacheStorage
{
	public void Remove(string key)
	{
		ObjectCache cache = MemoryCache.Default;
		cache.Remove(key);
	}

	public void Store(string key, object data)
	{
		ObjectCache cache = MemoryCache.Default;
		cache.Add(key, data, null);
	}

	public void Store(string key, object data, DateTime absoluteExpiration, TimeSpan slidingExpiration)
	{
		ObjectCache cache = MemoryCache.Default;
		// MemoryCache rejects a policy that sets both expiration types,
		// so apply only the one the caller actually specified
		var policy = new CacheItemPolicy();
		if (slidingExpiration > TimeSpan.Zero)
		{
			policy.SlidingExpiration = slidingExpiration;
		}
		else
		{
			policy.AbsoluteExpiration = absoluteExpiration;
		}

		if (cache.Contains(key))
		{
			cache.Remove(key);
		}
		cache.Add(key, data, policy);
	}

	public T Retrieve<T>(string key)
	{
		ObjectCache cache = MemoryCache.Default;
		return cache.Contains(key) ? (T)cache[key] : default(T);
	}
}

This service wraps MemoryCache.Default to store, remove and retrieve objects; the Retrieve&lt;T&gt; method is generic so callers get back a typed object. As the next step you want to call this service from the DefaultProductService class as follows:

public class DefaultProductService : IProductService
{
	private SystemRuntimeCacheStorage _cacheStorage;

	public DefaultProductService()
	{
		_cacheStorage = new SystemRuntimeCacheStorage();
	}

	public Product GetProduct(int productId)
	{
		string key = "product|" + productId;
		Product p = _cacheStorage.Retrieve<Product>(key);
		if (p == null)
		{
			p = new Product();
			_cacheStorage.Store(key, p);
		}
		return p;
	}
}

We’ve seen a similar example in the previous post where the consuming class constructs its own dependency. This “solution” has the same flaws as the one above – only the stack trace has changed. You’ll get the same faulty design with a factory as well. Still, it is a step towards a loosely coupled solution.

Dependency injection

As you know by now abstractions are the way to go to reach loose coupling. We can factor out the caching logic into an interface:

public interface ICacheStorage
{
	void Remove(string key);
	void Store(string key, object data);
	void Store(string key, object data, DateTime absoluteExpiration, TimeSpan slidingExpiration);
	T Retrieve<T>(string key);
}

Then using constructor injection we can inject the caching mechanism as follows:

public class DefaultProductService : IProductService
{
	private readonly ICacheStorage _cacheStorage;

	public DefaultProductService(ICacheStorage cacheStorage)
	{
		_cacheStorage = cacheStorage;
	}

	public Product GetProduct(int productId)
	{
		string key = "product|" + productId;
		Product p = _cacheStorage.Retrieve<Product>(key);
		if (p == null)
		{
			p = new Product();
			_cacheStorage.Store(key, p);
		}
		return p;
	}
}

Now we can inject any type of concrete caching solution which implements the ICacheStorage interface. As far as tests are concerned you can inject an empty caching solution using the Null object pattern so that the test can concentrate on the true logic of the GetProduct method.
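A minimal sketch of such a Null object cache might look as follows – the class name NullCacheStorage is my own, and the ICacheStorage interface from above is repeated so the sketch compiles on its own:

```csharp
using System;

// ICacheStorage as defined earlier, repeated so the sketch is self-contained.
public interface ICacheStorage
{
	void Remove(string key);
	void Store(string key, object data);
	void Store(string key, object data, DateTime absoluteExpiration, TimeSpan slidingExpiration);
	T Retrieve<T>(string key);
}

// Null object implementation: does nothing and always reports a cache miss,
// so a test of GetProduct exercises the real product logic rather than the cache.
public class NullCacheStorage : ICacheStorage
{
	public void Remove(string key) { }

	public void Store(string key, object data) { }

	public void Store(string key, object data, DateTime absoluteExpiration, TimeSpan slidingExpiration) { }

	public T Retrieve<T>(string key)
	{
		return default(T); // always a miss
	}
}
```

In a unit test you would then construct the service with new DefaultProductService(new NullCacheStorage()) and assert on the product logic alone.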

This is certainly a loosely coupled solution, but you may need to inject similar interfaces into a potentially large number of services:

public ProductService(ICacheStorage cacheStorage, ILoggingService loggingService, IPerformanceService performanceService)
public CustomerService(ICacheStorage cacheStorage, ILoggingService loggingService, IPerformanceService performanceService)
public OrderService(ICacheStorage cacheStorage, ILoggingService loggingService, IPerformanceService performanceService)

These services will permeate your class structure. Alternatively, you may create a base class for all services like this:

public ServiceBase(ICacheStorage cacheStorage, ILoggingService loggingService, IPerformanceService performanceService)

If all services must inherit from this base class then they start their lives with three abstract dependencies that they may not even need. Also, these dependencies don’t represent the true purpose of the services; they are only “sidekicks”.

Ambient context

For a discussion on this type of DI and when and why (not) to use it consult this post.

Interception using the Decorator pattern

The Decorator design pattern can be used as a do-it-yourself interception. The product service class can be reduced to its true purpose:

public class DefaultProductService : IProductService
{
	public Product GetProduct(int productId)
	{
		return new Product();
	}
}

A cached product service might look as follows:

public class CachedProductService : IProductService
{
	private readonly IProductService _innerProductService;
	private readonly ICacheStorage _cacheStorage;

	public CachedProductService(IProductService innerProductService, ICacheStorage cacheStorage)
	{
		if (innerProductService == null) throw new ArgumentNullException("innerProductService");
		if (cacheStorage == null) throw new ArgumentNullException("cacheStorage");
		_cacheStorage = cacheStorage;
		_innerProductService = innerProductService;
	}

	public Product GetProduct(int productId)
	{
		string key = "Product|" + productId;
		Product p = _cacheStorage.Retrieve<Product>(key);
		if (p == null)
		{
			p = _innerProductService.GetProduct(productId);
			_cacheStorage.Store(key, p);
		}

		return p;
	}
}

The cached product service itself implements IProductService and accepts another IProductService in its constructor. The injected product service will be used to retrieve the product in case the injected cache service cannot find it.

The consumer can actively use the cached implementation of the IProductService in place of the DefaultProductService class to deliberately call for caching. Here the call to retrieve a product is intercepted by caching. The cached service can concentrate on its task using the injected ICacheStorage object and delegates the actual product retrieval to the injected IProductService class.
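To see the interception in action, here is a compressed, self-contained sketch in which a plain dictionary stands in for the ICacheStorage dependency; the Calls counter is my addition, there only to demonstrate that the inner service is hit exactly once:

```csharp
using System.Collections.Generic;

public class Product { }

public interface IProductService { Product GetProduct(int productId); }

// Stand-in for the real service, trimmed to essentials.
public class DefaultProductService : IProductService
{
	public int Calls; // counts how often the "database" is actually hit

	public Product GetProduct(int productId)
	{
		Calls++;
		return new Product();
	}
}

// Decorator: same interface, wraps another IProductService.
public class CachedProductService : IProductService
{
	private readonly IProductService _inner;
	private readonly Dictionary<string, Product> _cache = new Dictionary<string, Product>();

	public CachedProductService(IProductService inner) { _inner = inner; }

	public Product GetProduct(int productId)
	{
		string key = "Product|" + productId;
		Product p;
		if (!_cache.TryGetValue(key, out p))
		{
			// cache miss: delegate to the decorated service, then remember the result
			p = _inner.GetProduct(productId);
			_cache[key] = p;
		}
		return p;
	}
}
```

Calling GetProduct(1) twice on the decorator hits the inner service only once; the second call is served from the cache.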

You can imagine that it’s possible to write a logging decorator, a performance decorator etc., i.e. a decorator for any type of cross-cutting concern. You can even decorate the decorator to include logging AND caching. Here you see several applications of SOLID. You keep the product service clean so that it adheres to the Single Responsibility Principle. You extend its functionality through the cached product service decorator which is an application of the Open-Closed principle. And obviously injecting the dependencies through abstractions is an example of the Dependency Inversion Principle.

The Decorator is a well-tested pattern to implement interception in a highly flexible object-oriented way. You can implement a lot of decorators for different purposes and you will adhere to SOLID pretty well. However, imagine that in a large business application with hundreds of domains and hundreds of services you may potentially have to write hundreds of decorators. As each decorator executes one thing only to adhere to SRP you may need to implement 3-4 decorators for each service.

That’s a lot of code to write… This is actually a practical limitation of solely using this pattern in a large application to achieve interception: it’s extremely repetitive and time consuming.

Aspect oriented programming (AOP)

The idea behind AOP is strongly related to attributes in .NET. An example of an attribute in .NET is the following:

[PrincipalPermission(SecurityAction.Demand, Role = "Administrator")]
protected void Page_Load(object sender, EventArgs e)
{
}

This is also an example of interception. The PrincipalPermission attribute checks the role of the current principal before the decorated method can continue. In this case the ASP.NET page won’t load unless the principal has the Administrator role. I.e. the call to Page_Load is intercepted by this Security attribute.

The decorator pattern we saw above is an example of imperative coding. The attributes are an example of declarative interception. Applying attributes to declare aspects is a common technique in AOP. Imagine that instead of writing all those decorators by hand you could simply decorate your objects as follows:

[Cached]
[Logged]
[PerformanceChecked]
public class DefaultProductService : IProductService
{		
	public Product GetProduct(int productId)
	{
		return new Product();
	}
}

It looks attractive, right? Well, let’s see.

The PrincipalPermission attribute is special as it’s built into the .NET base class library (BCL) along with some other attributes. .NET understands this attribute and knows how to act upon it. However, there are no built-in attributes for caching and other cross-cutting concerns. So you’d need to implement your own aspects. That’s not too difficult; your custom attribute will need to derive from the System.Attribute base class. You can then decorate your classes with your custom attribute but .NET won’t understand how to act upon it. The code behind your implemented attribute won’t run just like that.
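To illustrate: the following attribute compiles and can be applied, but decorating a class with it changes nothing about how the class executes. The names are made up for this example:

```csharp
using System;

// A custom attribute: deriving from System.Attribute is all it takes to compile.
[AttributeUsage(AttributeTargets.Class)]
public sealed class CachedAttribute : Attribute { }

// Decorating a class with [Cached] adds no caching behaviour by itself.
[Cached]
public class DecoratedProductService
{
	public int GetValue()
	{
		return 42; // runs exactly as written, attribute or not
	}
}
```

The attribute is only visible through reflection – typeof(DecoratedProductService).IsDefined(typeof(CachedAttribute), false) returns true – so some tool or framework has to find it and turn it into behaviour.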

There are commercial products, like PostSharp, that enable you to write attributes that are acted upon. PostSharp carries out its job by modifying your code in the post-compilation step. The “normal” compilation runs first, e.g. by csc.exe and then PostSharp adds its post-compilation step by taking the code behind your custom attribute(s) and injecting it into the code compiled by csc.exe in the correct places.

This sounds enticing. At least it sounded like heaven to me when we tested AOP with PostSharp: we wanted to measure the execution time of some very important methods of a service and save several values about their callers. So we implemented our custom attributes and were extremely proud of ourselves. Well, until someone else on the team started using PostSharp in his own assembly. When I referenced his project in mine I suddenly kept getting these funny notices that I had to activate my PostSharp account… So what’s wrong with those aspects?

  • The code you write will be different from what will be executed as new code will be injected into the compiled one in the post-compilation step. This may be tricky in a debugging session
  • The vendors will be happy to provide helper tools for debugging which may or may not be included in the base price and push you towards an anti-pattern where you depend on certain external vendors – also a form of tight coupling
  • All attributes must have default parameterless constructors – it’s not easy to consume dependencies from within an attribute. Your best bet is using ambient context – or abandon DI and go with default implementations of the dependencies
  • It can be difficult to fine-grain the rules when to apply an aspect. You may want to go with a convention-based applicability such as “apply the aspect on all objects whose name ends with ‘_log'”
  • The aspect itself is not an abstraction, so it’s not straightforward to inject different implementations of an aspect. If you decide to use System.Runtime.Caching in your attribute implementation, you cannot change your mind afterwards. You cannot implement a factory or any other mechanism to inject a concrete aspect in place of some abstract aspect, as there’s no such thing

This last point is probably the most serious one. It pulls you towards the dreaded tight-coupling scenario where you cannot easily redistribute a class or a module due to the concrete dependency introduced by an aspect. If you consume such an external library, like in the example I gave you, then you’re stuck with one implementation – and you better make sure you have access to the correct credentials to use that unwanted dependency…

Dynamic interception with a DI container

We briefly mentioned DI containers, or IoC containers, in this series. You may be familiar with some of them, such as StructureMap and Castle Windsor. I won’t get into any details regarding those tools; there are numerous tutorials available on the net to get you started. As you get more and more exposed to SOLID in your projects you’ll most likely become familiar with at least one of them.

Dynamic interception makes use of the ability of .NET to dynamically emit types. Some DI containers enable you to automate the generation of decorators to be emitted straight into a running process.

This approach is fully object-oriented and helps you avoid the shortcomings of AOP attributes listed above. You can register your own decorators with the IoC container, you don’t need to rely on a default one.

If you are new to DI containers then make sure you understand the basics before you go down the dynamic interception route. I won’t show you any code here on how to implement this technique as it depends on the IoC container of your choosing. The key steps as far as Castle Windsor is concerned are as follows:

  • Implement the IInterceptor interface for your decorator
  • Register the interceptor with the container
  • Activate the interceptor by implementing the IModelInterceptorsSelector interface – this is the step where you declare when and where the interceptors will be invoked
  • Register the class that implements the IModelInterceptorsSelector interface with the container

Carefully following these steps will ensure that you can implement dynamic interception without the need for attributes. Note that not all IoC containers come with the feature of dynamic interception.

Conclusions

In this mini-series on DI within the series about SOLID I hope to have explained the basics of the Dependency Inversion Principle. This last constituent of SOLID is probably the one that has caused the most controversy and misunderstanding of the five. Ask 10 developers about the purpose of DIP and you’ll get 11 different answers. You may well have come across ideas in these posts that you disagree with – feel free to comment in that case.

However, I think there are several myths and misunderstandings about DI and DIP that were successfully dismissed:

  • DI is the same as IoC containers: no, IoC containers can automate DI but you can by all means apply DI in your code without such a tool
  • DI can be solved with factories: look at the post on DI anti-patterns and you’ll laugh at this idea
  • DI requires an IoC container: see the first point, this is absolutely false
  • DI is only necessary if you want to enable unit testing: no, DI has several advantages as we saw, effective unit testing being only one of them
  • Interception is best done with AOP: no, see above
  • Using an IoC container will automatically result in DI: no, you have to prepare your code according to the DI patterns otherwise an IoC container will have nothing to inject

View the list of posts on Architecture and Patterns here.

SOLID design principles in .NET: the Dependency Inversion Principle Part 3, DI anti-patterns

In the previous post we discussed the various techniques how to implement Dependency Injection. Now it’s time to show how NOT to do DI. Therefore we’ll look at a couple of anti-patterns in this post.

The main source in the series on Dependency Injection is based on the work of Mark Seemann and his great book on DI.

Lack of DI

The most obvious DI anti-pattern is the total absence of it, where the class controls its dependencies. It is the most common anti-pattern in DI to see code like this:

public class ProductService
{
	private ProductRepository _productRepository;

	public ProductService()
	{
		_productRepository = new ProductRepository();
	}
}

Here the consumer class, i.e. ProductService, creates an instance of the ProductRepository class with the ‘new’ keyword. Thereby it directly controls the lifetime of the dependency. There’s no attempt to introduce an abstraction and the client has no way of introducing another type – implementation – for the dependency.

.NET languages, and certainly other similar platforms as well, make this extremely easy for the programmer. There is no automatic SOLID checking in Visual Studio, and why would there be such a mechanism? We want to give as much freedom to the programmer as possible, so they can pick – or even mix – C#, VB, F#, C++ etc., write in an object-oriented way or follow some other style, so there’s a high degree of control given to them within the framework. So it feels natural to new up objects as much as we like: if I need something I’ll need to go and get it. If the ProductService needs a product repository then it will need to fetch one. So even experienced programmers who know SOLID and DI inside out can fall into this trap from time to time, simply because it’s easy and programming against abstractions means more work and a higher degree of complexity.

The first step towards salvation may be to declare the private field as an abstract type and mark it as readonly:

public class ProductService
{
	private readonly IProductRepository _productRepository;

	public ProductService()
	{
		_productRepository = new ProductRepository();
	}
}

However, not much is gained here yet, as at runtime _productRepository will always be a new ProductRepository.

A common, but very wrong way of trying to resolve the dependency is by using some Factory. Factories are an extremely popular pattern and they seem to be the solution to just about anything – besides copy/paste of course. I’m half expecting Bruce Willis to save the world in Die Hard 27 by applying the static factory pattern. So no wonder people are trying to solve DI with it too. You can see from the post on factories that they come in 3 forms: abstract, static and concrete.

The “solution” with the concrete factory may look like this:

public class ProductRepositoryFactory
{
	public ProductRepository Create()
	{
		return new ProductRepository();
	}
}
public ProductService()
{
	ProductRepositoryFactory factory = new ProductRepositoryFactory();
	_productRepository = factory.Create();
}

Great, we’re now depending directly on ProductRepositoryFactory and ProductRepository is still hard-coded within the factory. So instead of just one hard dependency we now have two, well done!

What about a static factory?

public class ProductRepositoryFactory
{
	public static ProductRepository Create()
	{
		return new ProductRepository();
	}
}
public ProductService()
{
	_productRepository = ProductRepositoryFactory.Create();
}

We’ve got rid of the ‘new’, yaaay! Well, no: if you recall from the introduction, using static methods still creates a hard dependency, so ProductService still depends directly on ProductRepositoryFactory and indirectly on ProductRepository.

To make matters worse the static factory can be misused to produce a sense of freedom to the client to control the type of dependency as follows:

public class ProductRepositoryFactory
{
	public static IProductRepository Create(string repositoryTypeDescription)
	{
		switch (repositoryTypeDescription)
		{
			case "default":
				return new ProductRepository();
			case "test":
				return new TestProductRepository();
			default:
				throw new NotImplementedException();
		}
	}
}
public ProductService()
{
	_productRepository = ProductRepositoryFactory.Create("default");
}

Oh, brother, this is a real mess. ProductService still depends on the ProductRepositoryFactory class, so we haven’t eliminated that. It is now also indirectly dependent on the two concrete repository types returned by the factory. In addition, we now have magic strings flying around. If we ever introduce a third type of repository then we’ll need to revisit the factory and inform all actors that there’s a new magic string. This model is very difficult to extend. The ability to configure in code, or possibly in a config file, gives a false sense of security to the developer.

You can create a mock Product repository for a unit test scenario, extend the switch-case statement in the factory and maybe introduce a new magic string “mock” only for testing purposes. Then you pass “mock” to the Create method, recompile and run your unit tests. Then you forget to change it back to “default” – or whatever the production value is – and deploy the solution… Realising this you may want to overload the ProductService constructor like this:

public ProductService(string productRepoDescription)
{
	_productRepository = ProductRepositoryFactory.Create(productRepoDescription);
}

That only moves the magic string problem up the stacktrace but does nothing to solve the dependency problems outlined above.

Let’s look at an abstract factory:

public interface IProductRepositoryFactory
{
	IProductRepository Create(string repoDescription);
}

ProductRepositoryFactory can implement this interface:

public class ProductRepositoryFactory : IProductRepositoryFactory
{
	public IProductRepository Create(string repositoryTypeDescription)
	{
		switch (repositoryTypeDescription)
		{
			case "default":
				return new ProductRepository();
			case "test":
				return new TestProductRepository();
			default:
				throw new NotImplementedException();
		}
	}
}

You can use it from ProductService as follows:

private IProductRepositoryFactory _productRepositoryFactory;
private IProductRepository _productRepository;

public ProductService(string productRepoDescription)
{
	_productRepositoryFactory = new ProductRepositoryFactory();
	_productRepository = _productRepositoryFactory.Create(productRepoDescription);
}

We need to new up a ProductRepositoryFactory, so we’re back at square one. However, abstract factory is still the least harmful of the factory patterns when trying to solve DI as we can refactor this code in the following way:

private readonly IProductRepository _productRepository;

public ProductService(string productRepoDescription, IProductRepositoryFactory productRepositoryFactory)
{
	_productRepository = productRepositoryFactory.Create(productRepoDescription);
}

This is not THAT bad. We can provide any type of factory now but we still need to provide a magic string. A way of getting rid of that string would be to create specialised methods within the factory as follows:

public interface IProductRepositoryFactory
{
	IProductRepository Create(string repoDescription);
	IProductRepository CreateTestRepository();
	IProductRepository CreateSqlRepository();
	IProductRepository CreateMongoDbRepository();
}

…with ProductService using it like this:

private readonly IProductRepository _productRepository;

public ProductService(IProductRepositoryFactory productRepositoryFactory)
{
	_productRepository = productRepositoryFactory.CreateMongoDbRepository();
}

This is starting to look like proper DI. We can inject our own version of the factory and then pick the method that returns the necessary repository type. However, this is a false positive impression again. The ProductService still controls the type of repository returned by the factory. If we wanted to test the SQL repo then we have to revisit the product service – and all other services that need a repository – and select CreateSqlRepository() instead. Same goes for unit testing. You can certainly create a mock repository factory but you’ll need to make sure that the mock implementation returns mock objects for all repository types the factory returns. That breaks ISP in SOLID.

No, even in the above case the caller cannot control the type of IProductRepository used within ProductService. You can certainly control the concrete implementation of IProductRepositoryFactory, but that’s not enough.

Conclusion: factories are great for purposes other than DI. Use one of the strategies outlined in the previous post.

Lack of DI creates tightly coupled classes where one cannot exist without the other. You cannot redistribute the ProductService class without the concrete ProductRepository so it diminishes re-usability.

Overloaded constructors

Consider the following code:

private readonly IProductRepository _productRepository;

public ProductService() : this(new ProductRepository())
{}

public ProductService(IProductRepository productRepository)
{
	_productRepository = productRepository;
}

It’s great that we can inject an IProductRepository, but what’s the default constructor doing there? It simply calls the overloaded one with a concrete implementation of the repository interface. There you are, we’ve just introduced a completely unnecessary coupling. Matters get worse if the default implementation comes from an external source such as a factory seen above:

private readonly IProductRepository _productRepository;

public ProductService() : this(new ProductRepositoryFactory().Create("sql"))
{}

public ProductService(IProductRepository productRepository)
{
	_productRepository = productRepository;
}

By now you know why factories are not suitable for solving DI so I won’t repeat myself. Even if clients always call the overloaded constructor the class still cannot exist without either the ProductRepository or the ProductRepositoryFactory class.

The solution is easy: get rid of the default constructor and force clients to provide their own implementations.
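Applied to the example above, the repaired class keeps a single constructor; the guard clause and the stub are my additions:

```csharp
using System;

// Repeated here so the sketch compiles on its own.
public interface IProductRepository { }

public class ProductService
{
	private readonly IProductRepository _productRepository;

	// The only constructor: every caller must supply an implementation.
	public ProductService(IProductRepository productRepository)
	{
		if (productRepository == null) throw new ArgumentNullException("productRepository");
		_productRepository = productRepository;
	}
}

// A trivial stub a unit test might inject (hypothetical, for illustration only).
public class StubProductRepository : IProductRepository { }
```

With the default constructor gone, the class can no longer be instantiated with a hidden concrete dependency.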

Service locator

I’ve already written a post on this available here, so I won’t repeat the whole post. In short: a Service Locator resembles a proper IoC container such as StructureMap but it introduces easily avoidable couplings between objects.

Conclusion

We’ve discussed some of the ways not to do DI. There may certainly be more of those but these are probably the most frequent ones. The most important variant to get rid of is the lack of DI, which is exacerbated by the use of factories. That’s also the easiest to spot – look for the ‘new’ keyword in conjunction with dependencies.

The use of static methods and properties can also indicate a DIP violation:

DateTime.Now.ToString();
DataAccess.SaveCustomer(customer);
ProductRepositoryFactory.Create("sql");

We’ve seen especially in the case of factories that static methods and factories only place the dependencies one step down the execution ladder. Static methods are acceptable if they don’t themselves have concrete dependencies but only use the parameters already used by the object that’s calling the static method. However, if those static methods new up other dependencies which in turn may instantiate their own dependencies then that will quickly become a tightly coupled nightmare.
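The DateTime.Now example shows the remedy: wrap the static call behind an abstraction so the hidden dependency becomes explicit and injectable. The interface and class names below are my own invention, there is no such built-in interface in this version of the framework:

```csharp
using System;

// Hiding the static DateTime.Now call behind an abstraction turns
// "the current time" into an injectable dependency.
public interface IClock
{
	DateTime Now { get; }
}

// Production implementation: delegates to the real system clock.
public class SystemClock : IClock
{
	public DateTime Now { get { return DateTime.Now; } }
}

// Test implementation: time stands still, so assertions become deterministic.
public class FixedClock : IClock
{
	private readonly DateTime _value;

	public FixedClock(DateTime value) { _value = value; }

	public DateTime Now { get { return _value; } }
}
```

A class that needs timestamps then takes an IClock in its constructor instead of calling DateTime.Now directly, removing the static dependency flagged above.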

View the list of posts on Architecture and Patterns here.
