Design patterns and practices in .NET: the Decorator design pattern

Introduction

The Decorator pattern aims to extend the functionality of objects at runtime. It achieves this goal by wrapping the object in a decorator class leaving the original object intact. The result is an enhanced implementation of the object which does not break any client code that’s using the object.

It is an alternative solution to subclassing: you can always create subclasses of a class to extend and modify its functionality, but that can result in class-explosion. Class-explosion means that you create a large number of concrete classes from a base class to allow for the combination of different possible behaviour types. The result is a bloated class structure and a hard-to-maintain application.

The pattern supports the Open/Closed principle of SOLID software design: you don’t change the implementation of an existing object. Instead, you modify the object using a wrapper decorator class. All this happens dynamically at runtime, i.e. this is not a design time implementation. This yields another benefit: flexible design. Our application will be flexible enough to take on new functionality to meet changing requirements.

When is this pattern most applicable?

  • Legacy systems: changing legacy code can be risky. It’s safer to extend the existing code with decorators
  • Adding functionality to controls in Windows Forms / WPF
  • Sealed classes: the pattern lets you extend the behaviour of these classes even though they’re sealed

The key to understanding this pattern is wrapping: the decorator behaves just like the object you’re trying to change because it acts as a wrapper around that object. The decorator therefore holds a reference to that object in its class structure. The pattern is also called the Wrapper pattern for exactly this reason. You can even build a chain of decorators by wrapping the decorator of the decorator of the decorator, etc.

Demo with a Base Class

Fire up Visual Studio 2010 or 2012 and create a new Console application with the name you want. We’ll simulate a travel agency that offers 3 specific types of vacations: Recreation, Beach, Activity. Insert a base abstract class called Vacation:

public abstract class Vacation
	{
		public abstract string Description { get; }
		public abstract int Price { get; }
	}

Insert the following concrete classes:

public class Beach : Vacation
	{
		public override string Description
		{
			get { return "Beach"; }
		}

		public override int Price
		{
			get
			{
				return 2;
			}
		}
	}
public class Activity : Vacation
	{
		public override string Description
		{
			get { return "Activity"; }
		}

		public override int Price
		{
			get { return 4; }
		}
	}
public class Recreation : Vacation
	{
		public override string Description
		{
			get { return "Recreation"; }
		}

		public override int Price
		{
			get { return 3; }
		}
	}

The Main method in Program.cs is very simple:

static void Main(string[] args)
		{
			Beach beach = new Beach();

			Console.WriteLine(beach.Description);
			Console.WriteLine("Price: {0}", beach.Price);
			Console.ReadKey();
		}

I believe all is clear so far. Run the programme to see the output.

As time goes by customers want to add extras to their vacations: private pool, all-inclusive, massage etc. All of these can be added to each vacation type. So you start creating concrete classes such as BeachWithPool, BeachWithPoolAndMassage, ActivityWithPoolAndMassageAndAllInclusive, right?

Well, no, not really. That would result in the aforementioned class-explosion. The number of concrete classes in the application would grow exponentially as we added more and more extras and vacation types. That clearly cannot be maintained. Instead we’ll use the Decorator pattern to decorate the concrete types with the extras.

Go ahead and insert the following decorator class:

public class VacationDecorator : Vacation
	{
		private Vacation _vacation;

		public VacationDecorator(Vacation vacation)
		{
			_vacation = vacation;
		}

		public Vacation Vacation
		{
			get { return _vacation; }
		}

		public override string Description
		{
			get { return _vacation.Description; }
		}

		public override int Price
		{
			get { return _vacation.Price; }
		}
	}

Note that the decorator also derives from the Vacation abstract class. The decorator delegates the overridden property getters to the wrapped Vacation object, i.e. the object that’s going to be decorated. This class will serve as the base class for the concrete decorators. So let’s start with the PrivatePool decorator:

public class PrivatePool : VacationDecorator
	{
		public PrivatePool(Vacation vacation) : base(vacation){}

		public override string Description
		{
			get
			{
				return string.Concat(base.Vacation.Description, ", Private pool.");
			}
		}

		public override int Price
		{
			get
			{
				return base.Vacation.Price + 2;
			}
		}
	}

This concrete decorator derives from the VacationDecorator class. It overrides the Description and Price property getters by extending those of the wrapped Vacation object. Let’s try to add a private pool to our beach holiday in Program.cs:

static void Main(string[] args)
		{
			Vacation beach = new Beach();
			beach = new PrivatePool(beach);

			Console.WriteLine(beach.Description);
			Console.WriteLine("Price: {0}", beach.Price);
			Console.ReadKey();
		}

Note that we first create a new Beach object, but declare its type as Vacation. Then we take the PrivatePool decorator and use it as a wrapper for the Beach. The rest of the Main method has remained unchanged. Run the programme and you’ll see that the description and total price include the Beach and PrivatePool descriptions and prices. Note that when we wrap the Beach object in the PrivatePool decorator it becomes of type PrivatePool instead of the original Beach.
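To make the type change tangible, here's a small illustrative check, not part of the demo itself, using the classes defined above:

```csharp
Vacation beach = new Beach();
Console.WriteLine(beach.GetType().Name); // prints "Beach"

beach = new PrivatePool(beach);
Console.WriteLine(beach.GetType().Name); // prints "PrivatePool"

// The variable is still declared as Vacation, so client code working
// against the abstraction is unaffected by the wrapping.
```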

Let’s create two more decorators:

public class Massage : VacationDecorator
	{
		public Massage(Vacation vacation) : base(vacation){}

		public override string Description
		{
			get
			{
				return string.Concat(base.Vacation.Description, ", Massage.");
			}
		}

		public override int Price
		{
			get
			{
				return base.Vacation.Price + 1;
			}
		}
	}
public class AllInclusive : VacationDecorator
	{
		public AllInclusive(Vacation vacation) : base(vacation)
		{}

		public override string Description
		{
			get
			{
				return string.Concat(base.Vacation.Description, ", All inclusive.");
			}
		}

		public override int Price
		{
			get
			{
				return base.Vacation.Price + 3;
			}
		}
	}

These two decorators have the same structure as PrivatePool; only their description and price getters differ. We’re now ready to add the all-inclusive and massage extras to our Beach vacation:

static void Main(string[] args)
		{
			Vacation beach = new Beach();
			beach = new PrivatePool(beach);
			beach = new AllInclusive(beach);
			beach = new Massage(beach);

			Console.WriteLine(beach.Description);
			Console.WriteLine("Price: {0}", beach.Price);
			Console.ReadKey();
		}

We keep wrapping the decorators until the original Beach object has been wrapped in a PrivatePool, an AllInclusive and a Massage decorator. The final type is Massage. The call to the Description and Price property getters will simply keep going “upwards” in the decorator chain. Run the programme and you’ll see that the description and price getters include all 4 inputs.

Now if the travel agency wants to introduce a new type of extra, all they need to do is to build a new concrete class deriving from the VacationDecorator.

The original Beach class is completely unaware of the presence of decorators. In addition, we can mix and match vacation types with their decorators. Our class structure is easy to extend and maintain.
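To illustrate the mix-and-match point, here's a quick hypothetical combination of the types defined above: an Activity vacation with massage and all-inclusive extras:

```csharp
Vacation activity = new Activity();
activity = new Massage(activity);      // +1
activity = new AllInclusive(activity); // +3

Console.WriteLine(activity.Description); // Activity, Massage., All inclusive.
Console.WriteLine("Price: {0}", activity.Price); // 4 + 1 + 3 = 8
```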

Demo with an interface

You can apply the Decorator pattern to interface-type abstractions as well. Now that you understand the components of the pattern, let’s go through one quick example.

Insert a class called Product and an interface called IProductService. Leave the Product class empty. Insert a method into the interface:

public interface IProductService
	{
		IEnumerable<Product> GetProducts();
	}

Add a class called ProductService that implements IProductService:

public class ProductService : IProductService
	{
		public IEnumerable<Product> GetProducts()
		{
			return new List<Product>();
		}
	}

Using poor man’s dependency injection we can use the components as follows:

IProductService productService = new ProductService();
IEnumerable<Product> products = productService.GetProducts();
Console.ReadKey();

Nothing awfully complicated there, I hope. Let’s say that you want to add caching to the product service so that you don’t need to ask the product repository for the list of products every time. One option is of course to add caching directly to the body of the implemented GetProducts method. However, that is not the best choice: it goes against the Single Responsibility Principle and reduces testability. It’s not straightforward to unit test a method that caches the results in its body: if you run the unit test twice, you may not get a reliable result, as the method simply returns the cached outcome. Injecting some kind of caching strategy is a good way to go; check out the related Adapter pattern for more.

However, here we’ll take another approach. We want to extend the functionality of the Product service with a decorator. So let’s follow the same path as we did in the base class demo. Insert a decorator base as follows:

public class ProductServiceDecorator : IProductService
	{
		private readonly IProductService _productService;

		public ProductServiceDecorator(IProductService productService)
		{
			_productService = productService;
		}

		public IProductService ProductService { get { return _productService; } }

		public virtual IEnumerable<Product> GetProducts()
		{
			return _productService.GetProducts();
		}
	}

We follow the same principle as with the VacationDecorator class: we wrap an IProductService and call upon its implementation of GetProducts(). Let’s add the following cache decorator – you’ll need to add a reference to System.Runtime.Caching in order to have access to the ObjectCache class:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Caching;
using System.Text;
using System.Threading.Tasks;

namespace Decorator
{
	public class ProductServiceCacheDecorator : ProductServiceDecorator
	{

		public ProductServiceCacheDecorator(IProductService productService)
			: base(productService)
		{ }

		public override IEnumerable<Product> GetProducts()
		{
			ObjectCache cache = MemoryCache.Default;
			string key = "products";
			if (cache.Contains(key))
			{
				return (IEnumerable<Product>)cache[key];
			}
			else
			{
				IEnumerable<Product> products = base.ProductService.GetProducts();
				CacheItemPolicy policy = new CacheItemPolicy();
				policy.AbsoluteExpiration = new DateTimeOffset(DateTime.Now.AddMinutes(1));
				cache.Add(key, products, policy);
				return products;
			}
		}
	}
}

We check the cache for the presence of a key. If it’s there, we retrieve its contents; otherwise we ask the wrapped IProductService of the decorator base to carry out the query to the repository and cache the results. You can use these components as follows:

IProductService productService = new ProductService();
productService = new ProductServiceCacheDecorator(productService);
IEnumerable<Product> products = productService.GetProducts();
Console.ReadKey();

Run the programme and you’ll see that ProductServiceCacheDecorator takes over, runs the caching strategy and retrieves the products from the wrapped IProductService object.

View the list of posts on Architecture and Patterns here.

Continuation tasks in .NET TPL: many tasks continued by a single task

Tasks in .NET TPL make it easy to assign tasks that should run upon the completion of a certain task.

We’ll need a basic object with a single property:

public class Series
{
	public int CurrentValue
	{
		get;
		set;
	}
}

In previous installments of the posts on TPL we saw how to create tasks that are scheduled to run after a single task has completed: check here and here. It is also possible to create many tasks and a single continuation task that runs after all the antecedents have completed. The ContinueWhenAll() method takes an array of Task objects, all of which have to complete before the continuation task can proceed.

We create 10 tasks, each of which increases its local series value by 1000 in a loop:

Series series = new Series();

Task<int>[] motherTasks = new Task<int>[10];

for (int i = 0; i < 10; i++)
{
	motherTasks[i] = Task.Factory.StartNew<int>((stateObject) =>
	{
		int seriesValue = (int)stateObject;
		for (int j = 0; j < 1000; j++)
		{
			seriesValue++;
		}
		return seriesValue;
	}, series.CurrentValue);
}

All 10 tasks are antecedent tasks in the chain. Let’s declare the continuation task:

Task continuation = Task.Factory.ContinueWhenAll<int>(motherTasks, antecedents =>
{
	foreach (Task<int> task in antecedents)
	{
		series.CurrentValue += task.Result;
	}
});

continuation.Wait();
Console.WriteLine(series.CurrentValue);

We extract the series value from each antecedent task and increase the overall series value by the individual task results, i.e. by 1000 each. The lambda expression looks a bit awkward: the array of antecedent tasks appears twice, once as the first argument to the ContinueWhenAll method and then as the input to the lambda expression.

There’s one more thing to note about the syntax: ContinueWhenAll of T denotes the return type of the antecedent tasks. You can also specify the return type of the continuation. Here’s a list of possibilities:

The continuation and the antecedent tasks both return an int:

Task<int> continuation = Task.Factory.ContinueWhenAll<int>(motherTasks, antecedents =>
{
	foreach (Task<int> task in antecedents)
	{
		series.CurrentValue += task.Result;
	}
	return 1234;
});

Continuation is void and antecedent tasks return an int:

Task continuation = Task.Factory.ContinueWhenAll<int>(motherTasks, antecedents =>
{
	foreach (Task<int> task in antecedents)
	{
		series.CurrentValue += task.Result;
	}
});

All of them are void:

Task continuation = Task.Factory.ContinueWhenAll(motherTasks, antecedents =>
{
	foreach (Task<int> task in antecedents)
	{
		//do something
	}
});

Continuation returns an int and antecedent tasks are void:

Task<int> continuation = Task<int>.Factory.ContinueWhenAll(motherTasks, antecedents =>
{
	foreach (Task<int> task in antecedents)
	{
		//do something
	}
	return 1234;
});

Note that it’s enough to start the antecedent tasks, which we did with Task.Factory.StartNew. The continuation task will be scheduled automatically.

View the list of posts on the Task Parallel Library here.

Continuation tasks in .NET TPL: continuation with many tasks

Tasks in .NET TPL make it easy to assign tasks that should run upon the completion of a certain task.

We’ll need a basic object with a single property:

public class Series
{
	public int CurrentValue
	{
		get;
		set;
	}
}

Declare and start a task that increases the CurrentValue in a loop and returns the Series. This task is called the antecedent task in a chain of tasks.

Task<Series> motherTask = Task.Factory.StartNew<Series>(() =>
{
	Series series = new Series();
	for (int i = 0; i < 5000; i++)
	{
		series.CurrentValue++;
	}
	return series;
});

So this simple task returns a Series object with a current value property.

You can create other tasks that will continue upon the completion of the antecedent task. You can actually specify a condition when the continuation task should start. The default value is that the continuation task will be scheduled to start when the preceding task has completed. See the next post on TPL for examples.
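As a taste of those conditions: the ContinueWith overloads accept a TaskContinuationOptions value. The sketch below is illustrative; the option names come from the real enum:

```csharp
Task<Series> antecedent = Task.Factory.StartNew<Series>(() => new Series());

// Scheduled only if the antecedent completed without an exception:
antecedent.ContinueWith(t => Console.WriteLine("Series value: {0}", t.Result.CurrentValue),
	TaskContinuationOptions.OnlyOnRanToCompletion);

// Scheduled only if the antecedent threw an unhandled exception:
antecedent.ContinueWith(t => Console.WriteLine(t.Exception.InnerException.Message),
	TaskContinuationOptions.OnlyOnFaulted);
```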

You can combine continuations in different ways. You can have a linear chain:

  1. First task completes
  2. Second task follows first task and completes
  3. Third task follows second task and completes
  4. etc. with task #4, #5…

…or

  1. First task completes
  2. Second and third tasks follow the first task and both run
  3. Fourth task starts after second task and completes
  4. etc. with task #6, #7…

So you can create a whole tree of continuations as you wish. You can build up each continuation task one by one. Let’s declare the first continuation task:

Task<int> firstContinuationTask
	= motherTask.ContinueWith<int>((Task<Series> antecedent) =>
	{
		Console.WriteLine("First continuation task reporting series value: {0}", antecedent.Result.CurrentValue);
		return antecedent.Result.CurrentValue * 2;
	});

The first continuation task reads the Series object built by the antecedent, the “mother” task. It prints out the CurrentValue of the Series. The continuation task also returns an integer: the series value multiplied by two.

We’ll let a second continuation task print out the integer returned by the first continuation task, i.e. from the point of view of the second continuation task the first continuation task is the antecedent. We can say that “motherTask” is the “grandma” 🙂

Task secondContinuationTask
	= firstContinuationTask.ContinueWith((Task<int> antecedent) =>
	{
		Console.WriteLine("Second continuation task reporting series value: {0}", antecedent.Result);
	});

Note that it’s enough to start the first task, which we did with Task.Factory.StartNew. The continuation tasks will be scheduled automatically.

Alternatively you can declare the continuation tasks in a chained manner:

motherTask.ContinueWith<int>((Task<Series> antecedent) =>
{
	Console.WriteLine("First continuation task reporting series value: {0}", antecedent.Result.CurrentValue);
	return antecedent.Result.CurrentValue * 2;
}).ContinueWith((Task<int> antecedent) =>
{
	Console.WriteLine("Second continuation task reporting series value: {0}", antecedent.Result);
});

View the list of posts on the Task Parallel Library here.

Thread safe collections in .NET: ConcurrentBag

Concurrent collections in .NET work very much like their single-thread counterparts with the difference that they are thread safe. These collections can be used in scenarios where you need to share a collection between Tasks. They are typed and use a lightweight synchronisation mechanism to ensure that they are safe and fast to use in parallel programming.

Concurrent bags

Concurrent bags are similar to concurrent stacks and concurrent queues but there’s a key difference: bags are unordered collections. This means that the items are not necessarily retrieved in the order they were inserted. Concurrent bags are ideal if you would like to share something like a List<T> collection among several tasks and the ordering of the items doesn’t matter.

Important methods:

  • Add(T element): adds an item of type T to the collection
  • TryPeek(out T): tries to retrieve the next element from the collection without removing it. The value is assigned to the out parameter if the method succeeds; otherwise the method returns false
  • TryTake(out T): tries to retrieve an element and remove it from the collection. The out parameter is set to the retrieved element if the method succeeds; otherwise the method returns false

The ‘try’ bit in the method names implies that your code needs to prepare for the event where the element could not be retrieved. If multiple threads retrieve elements from the same collection you cannot be sure what’s in there when a specific thread tries to read from it.
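A minimal sketch of this prepare-for-failure pattern using TryPeek (illustrative only):

```csharp
ConcurrentBag<string> bag = new ConcurrentBag<string>();
bag.Add("first");

string next;
if (bag.TryPeek(out next))
{
	// next holds an element, but the element is still in the bag
	Console.WriteLine("Peeked: {0}", next);
}
else
{
	// another thread may have emptied the bag in the meantime
	Console.WriteLine("The bag was empty.");
}
```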

Example

The example is almost identical to what we saw for the collections discussed previously in the posts on TPL.

Declare and fill a concurrent bag:

ConcurrentBag<int> concurrentBag = new ConcurrentBag<int>();

for (int i = 0; i < 5000; i++)
{
	concurrentBag.Add(i);
}

Next we’ll try to take every item from the bag. The bag will be accessed by several tasks at the same time. The counter variable – which is also shared and synchronised – will be used to check if all items have been retrieved.

int counter = 0;

Task[] bagTasks = new Task[20];
for (int i = 0; i < bagTasks.Length; i++)
{
	bagTasks[i] = Task.Factory.StartNew(() =>
	{
		while (concurrentBag.Count > 0)
		{
			int bagElement;
			bool success = concurrentBag.TryTake(out bagElement);
			if (success)
			{
				Interlocked.Increment(ref counter);
			}
		}
	});
}

The while loop will ensure that we’ll try to take the items as long as there’s something left in the collection.

Wait for the tasks and print the number of items processed – the counter should have the same value as the number of items in the bag:

Task.WaitAll(bagTasks);
Console.WriteLine("Counter: {0}", counter);

View the list of posts on the Task Parallel Library here.

Share data between Tasks using locks in .NET C#

You need to be careful when sharing mutable data across several tasks. If you don’t restrict access to a shared variable then the tasks will all modify it in an unpredictable way. This is because you cannot be sure how the tasks will be started and managed by the operating system: it depends on a range of parameters, such as CPU usage, available memory, etc. If each task is supposed to modify a variable then the first task may not finish its work before a second task reads the same variable.

Consider the following:

class BankAccount
{
	public int Balance { get; set; }
}

The following code builds a list of tasks that each increase the balance by 1:

BankAccount account = new BankAccount();
List<Task> tasks = new List<Task>();

for (int i = 0; i < 5; i++)
{
	tasks.Add(Task.Factory.StartNew(() =>
	{
		for (int j = 0; j < 1000; j++)
		{
			account.Balance = account.Balance + 1;
		}
	}));
}
Task.WaitAll(tasks.ToArray());
Console.WriteLine(string.Concat("Expected value 5000", ", actual value ",account.Balance));

We expect the final balance to be 5000. Run the code and you will most likely get something less, such as 3856 or 4652. At times you get exactly 5000 but it’s unpredictable.

In such cases you need to identify the volatile critical regions in your code. In this example it’s simple:

account.Balance = account.Balance + 1;

It is the account balance that is modified by each thread, so we need to lock access to it so that only one thread can modify it at a time. You can take the word “lock” almost literally, as we use the lock keyword to do that. We take an object which will act as the synchronisation primitive. The lock object must be visible to all tasks:

object lockObject = new object();

You use the lock keyword as follows:

for (int i = 0; i < 5; i++)
{
	tasks.Add(Task.Factory.StartNew(() =>
	{
		for (int j = 0; j < 1000; j++)
		{
			lock (lockObject)
			{
				account.Balance = account.Balance + 1;
			}
		}
	}));
}

The volatile region of the code is locked using the lock object. The lock is released as soon as the current task has finished working with it.

Run the modified code and you should get the correct result.
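For a simple increment like this there's also a lighter-weight alternative to a full lock: the Interlocked class, which performs the read-modify-write as a single atomic operation. A sketch, assuming the balance is kept in a plain int variable (Interlocked needs a field or local, not a property):

```csharp
int balance = 0;
List<Task> tasks = new List<Task>();
for (int i = 0; i < 5; i++)
{
	tasks.Add(Task.Factory.StartNew(() =>
	{
		for (int j = 0; j < 1000; j++)
		{
			Interlocked.Increment(ref balance); // atomic, no lock object needed
		}
	}));
}
Task.WaitAll(tasks.ToArray());
Console.WriteLine(balance); // reliably 5000
```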

View the list of posts on the Task Parallel Library here.

Filter an array of integers with LINQ C# using a lambda expression

Consider the following array of integers:

int[] nums = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };

Say you want to keep only the odd numbers. One way to achieve this is using LINQ with a lambda expression. You’ll need a delegate that returns a boolean:

public delegate bool IntegerFilter(int i);

The following function accepts an integer array and a delegate and returns the filtered array:

public int[] FilterIntegerArray(int[] ints, IntegerFilter filter)
    {
      List<int> intList = new List<int>();
      foreach (int i in ints)
      {
        if (filter(i))
        {
          intList.Add(i);
        }
      }
      return intList.ToArray();
    }

You can call this method as follows using a lambda expression that matches the signature of the delegate:

int[] oddNumbers =
        FilterIntegerArray(nums, i => (i & 1) == 1);

You can view all LINQ-related posts on this blog here.

Simple Func delegate example with LINQ in .NET C#

Say you’d like to filter an array of integers:

int[] ints = new int[] { 1, 2, 3, 4, 5, 6, 7, 8 };

You’re only interested in integers greater than 3. There are many ways to filter an array of T. One of them is using a Func delegate. Func delegates come in the following forms:

public delegate TR Func<TR>();
public delegate TR Func<Arg0, TR>(Arg0 a0);
public delegate TR Func<Arg0, Arg1, TR>(Arg0 a0, Arg1 a1);
.
.
.
public delegate TR Func<Arg0, Arg1, Arg2, Arg3, TR>(Arg0 a0, Arg1 a1, Arg2 a2, Arg3 a3);

…where TR is the return type of the function and Arg(x) is the type of input parameter x. Note that the return type is the last element of the type parameter list.

One of the overloaded versions of the LINQ Where operator accepts a Func delegate:

IEnumerable<T> Where<T>(this IEnumerable<T> source, Func<T, bool> predicate);

So we declare our Func:

Func<int, bool> greaterThanThree = i => i > 3;

Then we declare our query:

IEnumerable<int> intsGreaterThanThree = ints.Where(greaterThanThree);

We finally enumerate the query:

foreach (int i in intsGreaterThanThree)
{
        Console.WriteLine(i);
}

You can view all LINQ-related posts on this blog here.

Introduction to OAuth2: Json Web Tokens

Introduction

JSON web tokens are a sort of security token. As such, they are used for authentication purposes and have attributes similar to the XML-formatted SAML tokens we met in the series on Claims Based Authentication: a token shows its issuer and the claims about the user, it must be signed to make it tamper-proof and it can have an expiration date. When you log on to a web page, the authentication server sends back a security token that contains the data mentioned above. Upon successful authentication the web site will consume the token.

Again, as in the case of SAML tokens, there must be a trust relationship between the consumer and the issuer of the token. This ensures that the contents of the token can be trusted. The consumer knows the key that the issuer uses to sign the token; that key can be used for validation purposes. At the time of writing this post SAML tokens are the most commonly used security tokens. SAML has quite a complex structure with a strict schema involved. It is very expressive with various encryption and signature options.

This last point is actually a limitation in the world of mobile devices. It’s needless to say how widespread mobile devices are in the world. There are probably more tablets and smartphones available nowadays than laptops and stationary computers. It feels like it’s only programmers like me who need computers… Mobile devices must perform their tasks with resources that are limited in comparison to computers. They are not well suited for parsing complex XML tokens.

This is where the JSON web tokens enter the scene with their simplified structure. The data is JSON which is more compact than XML and even mobile devices have the ability to parse them. JSON is also native to JavaScript which has grown into a very important language with applications such as Node.js, Windows 8 store apps, MongoDb etc. There are still a number of symmetric and asymmetric encryption options available for the token content and signature but it’s limited compared to what the SAML standard has to offer. However, the most widely accepted algorithms, such as RSA and AES are available and they will suit the vast majority of needs.

With the fast advance of mobile devices JSON web tokens will soon become – or have they already? – the new standard for security tokens. They are already the de facto standard for OAuth2 and OpenID Connect in modern mobile implementations.

Structure

A JSON security token consists of two parts:

  • Header with some metadata, e.g. on the algorithms and keys used to encrypt and sign the message
  • Claims

If you don’t know what claims are then make sure you understand the basics: start here.

These claims can be token specific, such as Issuer, Expiration date, or they can be defined by the application. You as the programmer are free to define what goes into the application-specific claims, such as FirstName, Email, NameOfPet, FavouriteColour, you name it.

If you’ve read through the series on Claims Based Auth available on this blog then the token specific claims are probably familiar to you:

  • Issuer: the identifier of the issuer of the token so that the recipient knows where the token is coming from
  • Audience: shows who the token is targeted at so that the recipient doesn’t use a token meant for a different application or website
  • IssuedAt: when the token was issued
  • Expiration: the expiration date of the token
  • Subject: typically a user ID that describes who the token is meant for

To keep the token size limited these claim types are abbreviated:

  • Issuer: iss
  • Audience: aud
  • IssuedAt: iat
  • Expiration: exp
  • Subject: sub

This is what a header can look like:

{
  "typ": "JWT",
  "alg": "HS256"
}

The “typ” value of JWT means of course JSON Web Token. “alg” shows that the HMACSHA256 algorithm was used to sign the token.

The claims section can look like this:

{
  "iss": "issuer name, usually some URL, e.g. http://www.myauthenticationserver.com",
  "exp": "a date in UNIX date format, i.e. the number of seconds since 1970/01/01",
  "aud": "the URL of the consumer, e.g. http://www.mygreatsite.com",
  "sub": "user identifier such as bobthebuilder"
}

Other claim types can be useful in the claims section, e.g. “client” to identify the application that requested the token, or “scope” to show the list of allowed operations for authorisation purposes:

{
"scope": ["read", "search"]
}

The list of claims can be extended as you wish, just as we saw in the case of SAML claims based authentication.

To make the token even more compact, the different sections are base64 encoded:

sdfgsdfgdfg.hrtg34twefwf4fg5g45gg.wsefefg345e4gf5g

These are just random characters, but locate the 2 periods in this string. They are delimiters for:

  1. Header
  2. Claims
  3. Signature

…where each section is individually base64 encoded.
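To see the three sections of a concrete token you can split it on the periods and base64-decode each part. Note that JWTs use the URL-safe base64 alphabet with the padding stripped, so the padding must be restored first. An illustrative sketch – the token below is a hypothetical sample, and you’ll need using directives for System and System.Text:

```csharp
string token = "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJib2IifQ.c2lnbmF0dXJl";
string[] parts = token.Split('.'); // [0] header, [1] claims, [2] signature

// base64url -> base64: swap the URL-safe characters and restore padding
string headerSegment = parts[0].Replace('-', '+').Replace('_', '/');
switch (headerSegment.Length % 4)
{
	case 2: headerSegment += "=="; break;
	case 3: headerSegment += "="; break;
}

string headerJson = Encoding.UTF8.GetString(Convert.FromBase64String(headerSegment));
Console.WriteLine(headerJson); // {"typ":"JWT","alg":"HS256"}
```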

Some code

Open Visual Studio 2012 or higher, create a console application and add the JWT package from NuGet.

In addition, add a reference to the System.IdentityModel library.

There’s a number of ways to exchange JWT tokens between a sender and a receiver. In the below example I’ll use an RSACryptoServiceProvider to sign the JWT so that the receiver can validate it. If you don’t know what RSA and asymmetric encryption mean then make sure to read up on them in the blog post mentioned above.

In short: the sender, i.e. the issuer of the token will have a pair of asymmetric encryption keys. The public key can be distributed to the receivers. The receiver will use the public key to validate the signature of the JWT token.

We won’t build a separate sender and receiver, that’s not the point here, but we want to simulate that the sender has access to both the private and public keys and the receiver only has the public key.

The following method will construct a valid RSA key pair:

private static RsaKeyGenerationResult GenerateRsaKeys()
{
	RSACryptoServiceProvider myRSA = new RSACryptoServiceProvider(2048);
	RsaKeyGenerationResult result = new RsaKeyGenerationResult();
	result.PublicAndPrivateKey = myRSA.ToXmlString(true);
	result.PublicKeyOnly = myRSA.ToXmlString(false);
	return result;
}

…where RsaKeyGenerationResult is a DTO:

public class RsaKeyGenerationResult
{
	public string PublicKeyOnly { get; set; }
	public string PublicAndPrivateKey { get; set; }
}

In GenerateRsaKeys() we generate an RSA key and save its full set of keys and the public key only in two separate parameters of the RsaKeyGenerationResult object.

This is how to build and serialise the token:

RSACryptoServiceProvider publicAndPrivate = new RSACryptoServiceProvider();			
RsaKeyGenerationResult keyGenerationResult = GenerateRsaKeys();

publicAndPrivate.FromXmlString(keyGenerationResult.PublicAndPrivateKey);
JwtSecurityToken jwtToken = new JwtSecurityToken
	(issuer: "http://issuer.com", audience: "http://mysite.com"
	, claims: new List<Claim>() { new Claim(ClaimTypes.Name, "Andras Nemes") }
	, lifetime: new Lifetime(DateTime.UtcNow, DateTime.UtcNow.AddHours(1))
	, signingCredentials: new SigningCredentials(new RsaSecurityKey(publicAndPrivate)
		, SecurityAlgorithms.RsaSha256Signature, SecurityAlgorithms.Sha256Digest));

JwtSecurityTokenHandler tokenHandler = new JwtSecurityTokenHandler();
string tokenString = tokenHandler.WriteToken(jwtToken);

Console.WriteLine("Token string: {0}", tokenString);

You’ll recognise the parameters in the JwtSecurityToken constructor. Note that we used the full RSA key, i.e. the one including the private key, to sign the token. The SecurityAlgorithms class stores the fully qualified names of the signature and digest algorithms as string constants. Those are compulsory arguments in the constructor.

Run the code up to this point and you should see a long string with the 3 segments mentioned above.

If your organisation has a valid X509 certificate then it can be used in place of the RsaSecurityKey object. Check out the X509AsymmetricSecurityKey class, which accepts an X509Certificate2 object in its constructor. You’ll need to export the certificate with both the private and public keys and save it as a .pfx file; you can do that using the certmgr snap-in in Windows. Pass the full file name of the .pfx file to the X509Certificate2 constructor.
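As a sketch, signing with a certificate could look like the following. The .pfx file name and password are made-up placeholders; the rest mirrors the token construction shown above:

```csharp
using System;
using System.Collections.Generic;
using System.IdentityModel.Protocols.WSTrust; // for Lifetime
using System.IdentityModel.Tokens;
using System.Security.Claims;
using System.Security.Cryptography.X509Certificates;

class X509SigningDemo
{
	static JwtSecurityToken BuildCertSignedToken()
	{
		// Hypothetical file path and password: substitute your own exported certificate
		X509Certificate2 cert = new X509Certificate2(@"C:\certs\tokensigning.pfx", "pfxPassword");
		X509AsymmetricSecurityKey certKey = new X509AsymmetricSecurityKey(cert);

		// Same constructor as before, only the signing key differs
		return new JwtSecurityToken
			(issuer: "http://issuer.com", audience: "http://mysite.com"
			, claims: new List<Claim>() { new Claim(ClaimTypes.Name, "Andras Nemes") }
			, lifetime: new Lifetime(DateTime.UtcNow, DateTime.UtcNow.AddHours(1))
			, signingCredentials: new SigningCredentials(certKey
				, SecurityAlgorithms.RsaSha256Signature, SecurityAlgorithms.Sha256Digest));
	}
}
```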

You can transmit the token in a number of different ways: in the header of an HTTP request, in the query string, in a cookie, etc. It’s now a string that you can send to the consumer.
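For instance, sending it in the HTTP Authorization header with the conventional Bearer scheme could look like this sketch; the endpoint URL is hypothetical:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class TokenSender
{
	// tokenString is the serialised JWT produced by WriteToken above
	static async Task<HttpResponseMessage> CallProtectedResourceAsync(string tokenString)
	{
		using (HttpClient client = new HttpClient())
		{
			// Sends the header "Authorization: Bearer <token>" with every request
			client.DefaultRequestHeaders.Authorization =
				new AuthenticationHeaderValue("Bearer", tokenString);
			return await client.GetAsync("http://mysite.com/api/protected");
		}
	}
}
```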

The consumer can validate and read the claims from the token as follows:

JwtSecurityToken tokenReceived = new JwtSecurityToken(tokenString);

RSACryptoServiceProvider publicOnly = new RSACryptoServiceProvider();
publicOnly.FromXmlString(keyGenerationResult.PublicKeyOnly);
TokenValidationParameters validationParameters = new TokenValidationParameters()
{
	ValidIssuer = "http://issuer.com"
	, AllowedAudience = "http://mysite.com"
	, SigningToken = new RsaSecurityToken(publicOnly)
};
			
JwtSecurityTokenHandler recipientTokenHandler = new JwtSecurityTokenHandler();
ClaimsPrincipal claimsPrincipal = recipientTokenHandler.ValidateToken(tokenReceived, validationParameters);

The client receives the base64 string which can be used in the constructor of the JwtSecurityToken object. We then use the public-key-only version of the RSACryptoServiceProvider to simulate that the receiver only has access to the public key of the sender. The TokenValidationParameters object can be used to build the validation logic. The JwtSecurityTokenHandler object is then used to validate the token. Upon successful validation you’ll get the ClaimsPrincipal which you’ll recognise from the posts on claims-based auth mentioned above.

Set a breakpoint after the last line of code and inspect the contents of the claimsPrincipal object by hovering over it in VS.

The encoded JWT token can be decoded on this web page:

Jwt decoder page

Read the next part in this series here.

You can view the list of posts on Security and Cryptography here.

Getting a return value from a Task with C#

Sometimes you want to get a return value from a Task as opposed to letting it run in the background and forgetting about it. You’ll need to specify the return type as a type parameter to the Task object: a Task of T.

.NET 4.0

Without specifying an input parameter:

Task<int> task = new Task<int>(() => 
{
    int total = 0;
    for (int i = 0; i < 500; i++)
    {
        total += i;
    }
    return total;
});

task.Start();
int result = task.Result;

We count to 500 and return the sum. The return value of the Task can be retrieved through the Result property; reading it blocks the calling thread until the task has completed. Since the task was declared as Task<int>, Result is already strongly typed as int, so no conversion is needed.

You can provide an input parameter as well:

Task<int> task = new Task<int>(obj => 
{
    int total = 0;
    int max = (int)obj;
    for (int i = 0; i < max; i++)
    {
        total += i;
    }
    return total;
}, 300);

task.Start();
int result = task.Result;

We pass 300 as the state object, i.e. this time we count to 300.

.NET 4.5

The recommended way in .NET 4.5 is to use Task.FromResult, Task.Run or Task.Factory.StartNew:

FromResult:

public async Task DoWork()
{
       int res = await Task.FromResult<int>(GetSum(4, 5));	
}

private int GetSum(int a, int b)
{
	return a + b;
}

Please check out Stefan’s comments on the usage of FromResult in the comments section below the post.

Task.Run:

public async Task DoWork()
{
	Func<int> function = new Func<int>(() => GetSum(4, 5));
	int res = await Task.Run<int>(function);
}

private int GetSum(int a, int b)
{
	return a + b;
}

Task.Factory.StartNew:

public async Task DoWork()
{
	Func<int> function = new Func<int>(() => GetSum(4, 5));
	int res = await Task.Factory.StartNew<int>(function);			
}

private int GetSum(int a, int b)
{
	return a + b;
}

View the list of posts on the Task Parallel Library here.

.NET Developers’ user guide for troubleshooting networking problems Part 3

This is the last part in the series on basic networking for developers. Let’s look at firewalls first.

Firewalls

Firewalls are a common cause of port connectivity problems. What does a firewall do anyway? A firewall determines which connections are allowed to go through to the operating system and which ones are not. The firewall has a set of rules that state what traffic is allowed through. In the below example port 80 is let in but not port 25:

Firewall stop

Open Windows firewall as follows:

Open Windows firewall

This opens the Windows firewall manager:

Windows firewall manager

You’ll see a link for Windows Firewall Properties somewhere in the middle of the screen:

Open windows firewall properties

Check the tabs in that window: you’ll see that you can set different options for the domain, private and public profiles, which represent the different types of network Windows can be connected to. It’s recommended to have the same settings for all 3 profiles unless you want different rules for, say, your enterprise and home networks. In this window you can also turn the firewall on and off; the default is on.

By default all inbound connections are blocked and all outbound traffic is let out, so traffic coming into our machine is rejected unless a rule allows it. You can also set the logging properties:

Open firewall logging settings

By default neither dropped nor successful connections are logged. If you suspect that your firewall drops data packets destined for your machine then it can be useful to log such events, so change that drop-down list to Yes. You can also specify the log file location at the top of the window.

On the main Firewall screen you’ll see a link to the Inbound rules on the left hand side:

Open firewall inbound rules

You can add new inbound rules using the New Rule link:

Open firewall new inbound rule

You can create a rule by program, port, a predefined set of rules or a custom rule. For a program rule you specify an executable:

Specify Executable For Firewall Inbound Rule

This way we can open up or block the ports a specific program is listening on. Select some executable and click next. In the next screen you can select to open up the ports used by that executable or block them:

Inbound connection either blocked or allowed

Normally you’ll select the Allow option as all inbound traffic is blocked by default anyway. Click next and here you can define which profile to apply the rule to:

Which Windows profile to apply the rule to

As we said normally you’ll apply the same rules to all profiles. Then in the last step of the process you can provide a name for this rule. Give it some name and click finish. The new rule should appear in the list of rules on the main firewall screen.

This way of setting up a rule is useful if you’re not sure which port(s) a process uses. You can instead declare that all ports be opened up that are in use by that application.

Let’s create another rule, this time a port rule. Click the New Rule… link again and select the Port radio button and click next. On the next screen you’ll be able to define the type, i.e. TCP or UDP:

Inbound rule type tcp or udp

You can also define which ports to open or close: all ports, one specific port or a range of ports. Let’s specify '80, 443' in the Specific local ports text box, which will allow HTTP(S) traffic. Click Next and this screen will be familiar: you can allow or block the connection. Click Next. Again, the window will be familiar: you can define in which Windows profiles the rule will apply. In the last screen you can give a name to the rule, just like before. You’ll typically set this rule on your web server: if you don’t open up port 80, no-one will be able to access the contents of your website.

You can add predefined rules by selecting the Predefined radio button in the very first window of the setup process. Open up the drop-down list and you’ll see a whole bunch of predefined rules. These rules represent the Windows services that have been installed on your machine. You’ll see an option called Remote Desktop; this rule allows others to remotely connect to the computer. Click next and you’ll see some information on which port is going to be opened and some other parameters of the rule. If the predefined option needs more than one rule, such as Remote Volume Management, then all of them will be listed here.

The Custom rule type will give you a lot of freedom to define your rules. Click Next and you’ll see the window again where you can pick an application. Click Next to go to the Protocol and Ports window:

Protocol and ports inbound rule

Check out the Protocol type drop down list. Besides TCP and UDP which we discussed here you’ll see a whole range of other protocol types. E.g. the ICMPv4 protocol is used by the ping function by which you can ping a website in the command window. Select that protocol type. You can then click the Customize button where you can specify which ICMP packets to allow:

Customise ICMP packets

Select All ICMP types and click OK. Click Next to go to the Scope window. Here you can specify which local IP address this rule applies to and where we want to allow the traffic from – this is given in the remote IP section in the bottom half of the window. For now select the Any IP address option for both. The last three stages, i.e. Action, Profile and Name will all be familiar by now.
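The ICMP echo mechanism used by ping is also exposed in .NET through the Ping class, so you can test from code whether a firewall rule lets echo requests through. A minimal sketch:

```csharp
using System;
using System.Net.NetworkInformation;

class PingDemo
{
	static void Main()
	{
		// Ping sends an ICMPv4 echo request, i.e. the protocol type discussed above.
		// If an inbound ICMP rule blocks echo requests on the target, the reply status
		// will typically be TimedOut instead of Success.
		using (Ping ping = new Ping())
		{
			PingReply reply = ping.Send("localhost", 2000); // 2 second timeout
			Console.WriteLine("Status: {0}, roundtrip: {1} ms", reply.Status, reply.RoundtripTime);
		}
	}
}
```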

You can always come back later and update your rules. Just right-click a rule in the main window and select Properties from the context menu. This will open the Properties window:

Properties window for updating inbound rules

In this window you can specify a couple of options that were not available during the normal setup process. E.g. you can provide the authorised computers under the Computers tab. You can specify the users that are allowed to access this rule under the Users tab. Under the Scope tab you can define the IP addresses as we saw in the case of the Custom rule. As you see you can get to these options for any rule type but it’s offered during the setup phase only in the case of the Custom rule.

The Scope tab can be interesting if you want to set up the Remote Desktop predefined rule. You probably don’t want to let just any computer get remote access to your machine, right? For any given machine in a network it is most likely enough to let other computers in the same subnet access it. E.g. I can remotely access the web servers that belong to the network of the company I work at, but we don’t want anyone else to be able to reach those computers.

In that scenario you can specify the correct IPs in the Remote IP address section:

Provide remote ips for remote access

The easiest way to achieve this is by selecting the Predefined set of computers radio button and marking the Local subnet option in the drop-down list:

Select local subnet in remote access

You can allow other subnets as well by clicking the Add button again and filling in the IP ranges.

Network Address Translation and private IPs

We mentioned in the previous module of this series that with NAT we can have multiple private IPs corresponding to the same external public IP. We also said that we’re running short of public IP addresses so we can look at them as scarce and expensive commodities. Your ISP will probably only give you a single public IP although you can have several machines online in your home: your laptop, tablet, smart phone, your children’s computers etc. They will all have a private IP. Each private IP will be translated to the public IP in the outgoing traffic. Conversely the public IP will be translated to the correct private IP in the incoming traffic.

It actually makes sense that not all devices need public IPs. Why would anyone need to access your laptop from the public Internet?

The NAT device in the home environment is usually your router. It translates the sessions back and forth between the internal and external addresses. If you have a service on your home desktop that you want to make publicly available, the traffic coming to your public IP will not be routed to the private IP of your desktop unless you set up a specific NAT rule on your routing device. This NAT rule will say that any inbound traffic coming to the external IP address be routed to a specific internal IP address.

Alternatively you can set up a port rule on the NAT device. The port rule will say that any traffic destined for a specific port be routed to an internal IP address. You can set up multiple port rules to direct incoming traffic to the correct private IP.

This can be useful if you want to host a website on your home desktop or you want to be able to remotely access a specific computer in your house.

The easiest way to find your public IP is to use one of the many online IP echo services. These sites will show you which IP address you’re coming from.
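You can do the same from code by querying such a service; in this sketch api.ipify.org is just one example of a public IP echo service, not a recommendation:

```csharp
using System;
using System.Net;

class PublicIpDemo
{
	static void Main()
	{
		using (WebClient client = new WebClient())
		{
			// The service simply echoes back the caller's public IP as plain text
			string publicIp = client.DownloadString("https://api.ipify.org");
			Console.WriteLine("Public IP: {0}", publicIp);
		}
	}
}
```

Note that the IP printed is the one after NAT, i.e. the external address your router presents to the Internet, not your machine's private IP.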
