SOLID principles in .NET revisited part 9: concentrating on enumerations

Introduction

In the previous post we streamlined our demo code so that it adheres to the SOLID principles. You may still spot bits and pieces that violate some design principle. It’s important to note that in a large enterprise project it’s very difficult to attain 100% SOLID compliance, if that state exists at all. You can spot “deviations” even in the most well-maintained code bases.

In this post we’ll concentrate on enumerations. We saw earlier in this series how the misuse of enums can easily lead to maintainability problems. Enumerations are popular due to their simplicity and how easily they can be used to define the set of valid values in a certain category.

We’ll first build up a short case study containing a number of mistakes, with special emphasis on enumerations. We’ll then improve the code in the next post.

Threshold evaluation model

In the case study we’ll simulate thresholds and how they can be evaluated. We’ll base our model on a hypothetical web performance test where we measure a range of statistics related to the behaviour of a web site during a load test. The user can specify conditions similar to the following before starting the test:

  • If the average URL response time exceeds 5 seconds then the performance test fails
  • If the number of successful URL calls per minute is less than 10 then the test fails

Note that the performance metrics, such as “average URL response time”, and the evaluation operators, such as “is less than”, can be extended with other values. These conditions are evaluated at the end of the test based on the actual results. If at least one threshold is broken then the test fails.

Let’s see how this scenario can be modelled in code, with deliberate drawbacks and design errors.

Code starting point

We all love enumerations, right? Performance metric types and operators wonderfully fit into the following enumerations:

public enum EvaluationOperator
{
	GreaterThan,
	LessThan
}
public enum PerformanceMetricType
{
	AverageResponseTime,
	UrlCallsPassedPerMinute
}

We can then build a Threshold object with the above two enumerations and a threshold limit parameter:

public class Threshold
{
	private EvaluationOperator _operator;
	private PerformanceMetricType _metric;
	private double _limit;

	public Threshold(EvaluationOperator evaluationOperator, PerformanceMetricType metric, double limit)
	{
		_operator = evaluationOperator;
		_metric = metric;
		_limit = limit;
	}

	public EvaluationOperator EvaluationOperator
	{
		get
		{
			return _operator;
		}
	}

	public PerformanceMetricType PerformanceMetricType
	{
		get
		{
			return _metric;
		}
	}

	public double Limit
	{
		get
		{
			return _limit;
		}
	}
}

Let’s say that the performance test statistics are summarised in the following custom object:

public class PerformanceSummary
{
	public double TotalPassedLoops { get; set; }
	public double TotalFailedLoops { get; set; }
	public double TotalPassedCallsPerMinute { get; set; }
	public double AverageNetworkThroughput { get; set; }
	public double AverageSessionTimePerLoop { get; set; }
	public double AverageResponseTimePerLoop { get; set; }
	public double WebTransactionRate { get; set; }
	public double AverageResponseTimePerPage { get; set; }
	public double TotalHttpCalls { get; set; }
	public double AverageNetworkConnectTime { get; set; }
	public double TotalTransmittedBytes { get; set; }
}

Another thing that many programmers love is putting “special” code in dedicated services no matter what. We’ll follow that practice and put the code that evaluates the thresholds in a MetricEvaluationService. We’ll wrap the threshold evaluation result in a custom object:

public class ThresholdEvaluationResult
{
	public bool ThresholdBroken { get; set; }
}

Here’s the implementation of MetricEvaluationService:

public class MetricEvaluationService
{
	public ThresholdEvaluationResult EvaluateThreshold(PerformanceSummary performanceSummary, Threshold threshold)
	{
		ThresholdEvaluationResult thresholdEvaluationResult = new ThresholdEvaluationResult();
		double thresholdValue = threshold.Limit;
		if (threshold.PerformanceMetricType == PerformanceMetricType.AverageResponseTime)
		{
			double averageResponseTimePerPage = performanceSummary.AverageResponseTimePerPage;
			switch (threshold.EvaluationOperator)
			{
				case EvaluationOperator.GreaterThan:
					if (thresholdValue < averageResponseTimePerPage)
					{
						thresholdEvaluationResult.ThresholdBroken = true;
					}
					break;
				case EvaluationOperator.LessThan:
					if (thresholdValue > averageResponseTimePerPage)
					{
						thresholdEvaluationResult.ThresholdBroken = true;
					}
					break;
			}
		}

		if (threshold.PerformanceMetricType == PerformanceMetricType.UrlCallsPassedPerMinute)
		{
			double urlCallsPassedPerMinute = performanceSummary.TotalPassedCallsPerMinute;
			switch (threshold.EvaluationOperator)
			{
				case EvaluationOperator.GreaterThan:
					if (thresholdValue < urlCallsPassedPerMinute)
					{
						thresholdEvaluationResult.ThresholdBroken = true;
					}
					break;
				case EvaluationOperator.LessThan:
					if (thresholdValue > urlCallsPassedPerMinute)
					{
						thresholdEvaluationResult.ThresholdBroken = true;
					}
					break;
			}
		}

		return thresholdEvaluationResult;
	}
}

We first check the metric type of the threshold and read the appropriate value from the PerformanceSummary object. We then branch the evaluation logic according to the metric type. Within each metric-type branch a switch block evaluates the threshold according to the operator type. If the threshold limit is broken then we set the ThresholdBroken property of ThresholdEvaluationResult to true.
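To see the whole flow end to end, here is a compact, runnable sketch of the same design. The classes are trimmed down to just the members the example needs, and the evaluation logic is condensed; the sample figures are made up:

```csharp
using System;

public enum EvaluationOperator { GreaterThan, LessThan }
public enum PerformanceMetricType { AverageResponseTime, UrlCallsPassedPerMinute }

// Trimmed-down versions of the classes above, just enough to run the example.
public class Threshold
{
	public EvaluationOperator EvaluationOperator { get; private set; }
	public PerformanceMetricType PerformanceMetricType { get; private set; }
	public double Limit { get; private set; }

	public Threshold(EvaluationOperator op, PerformanceMetricType metric, double limit)
	{
		EvaluationOperator = op;
		PerformanceMetricType = metric;
		Limit = limit;
	}
}

public class PerformanceSummary
{
	public double AverageResponseTimePerPage { get; set; }
	public double TotalPassedCallsPerMinute { get; set; }
}

public class ThresholdEvaluationResult
{
	public bool ThresholdBroken { get; set; }
}

public class MetricEvaluationService
{
	// A condensed version of EvaluateThreshold with the same behaviour.
	public ThresholdEvaluationResult EvaluateThreshold(PerformanceSummary s, Threshold t)
	{
		var result = new ThresholdEvaluationResult();
		double measured = t.PerformanceMetricType == PerformanceMetricType.AverageResponseTime
			? s.AverageResponseTimePerPage
			: s.TotalPassedCallsPerMinute;
		if (t.EvaluationOperator == EvaluationOperator.GreaterThan && t.Limit < measured)
			result.ThresholdBroken = true;
		if (t.EvaluationOperator == EvaluationOperator.LessThan && t.Limit > measured)
			result.ThresholdBroken = true;
		return result;
	}
}

public static class Program
{
	public static void Main()
	{
		// "If the average URL response time exceeds 5 seconds then the test fails"
		var threshold = new Threshold(EvaluationOperator.GreaterThan,
			PerformanceMetricType.AverageResponseTime, 5);
		var summary = new PerformanceSummary { AverageResponseTimePerPage = 6.2 };

		var service = new MetricEvaluationService();
		Console.WriteLine(service.EvaluateThreshold(summary, threshold).ThresholdBroken); // True
	}
}
```

The limit of 5 seconds is exceeded by the measured 6.2 seconds, so the threshold is reported as broken.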

What’s wrong with this code?

There are several things that have gone wrong. From what we’ve seen on SOLID we now know that the EvaluateThreshold method will be difficult to maintain in the future. If a new metric type and/or a new operator is added to the requirements then we’ll need to extend the ‘if’ and ‘switch’ blocks, and obviously the PerformanceMetricType and EvaluationOperator enumerations as well. Furthermore, we have put the metric evaluation logic in a specialised class outside the Threshold class, where it really belongs.
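To illustrate the maintenance cost, here is a minimal, self-contained sketch of what a hypothetical new operator, say GreaterThanOrEqual, would force on us: the enum changes, and so must every switch that branches on it. The Demo class and its IsBroken helper are simplified stand-ins for the evaluation logic above, not part of the demo code:

```csharp
using System;

public enum EvaluationOperator
{
	GreaterThan,
	LessThan,
	GreaterThanOrEqual // hypothetical new requirement: the enum must change...
}

public static class Demo
{
	// ...and so must every switch that branches on the enum,
	// i.e. once per metric-type branch in EvaluateThreshold.
	public static bool IsBroken(EvaluationOperator op, double limit, double measured)
	{
		switch (op)
		{
			case EvaluationOperator.GreaterThan:
				return limit < measured;
			case EvaluationOperator.LessThan:
				return limit > measured;
			case EvaluationOperator.GreaterThanOrEqual: // newly added case
				return limit <= measured;
			default:
				return false;
		}
	}

	public static void Main()
	{
		Console.WriteLine(IsBroken(EvaluationOperator.GreaterThanOrEqual, 5, 5)); // True
	}
}
```

With two metric types this means editing the switch in two places; with ten metric types it would be ten places, which is exactly the kind of shotgun surgery the open-closed principle warns against.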

These are the major issues with this demo code, and they need some serious attention. We’ll make the code better in the next post.

View the list of posts on Architecture and Patterns here.

About Andras Nemes
I'm a .NET/Java developer living and working in Stockholm, Sweden.
