Posts Tagged ‘test automation pyramid’

Sharepoint & TDD: Getting started advice

July 1st, 2011 4 comments

I have had a couple of people asking lately about getting started on SharePoint. They've asked about how to move forward with unit and integration testing and with stability. No one wants to go down the mocking route (TypeMock, Pex and Moles), and quite rightly so. So here's my road map:

The foundations: Hello World scripted and deployable without the GUI

  1. Get a Hello World SharePoint “app” – something that is packageable and deployable as a WSP
  2. Restructure the folders of the code away from the Microsoft project structure so that the root folder has src/, tools/, lib/ and scripts/ folders. All source and tests are in src/ folder. This lays the foundation for a layered code base. The layout looks like this sample application
  3. Make the compilation, packaging, installation (and configuration) all scripted. Learn to use psake for your build scripts and PowerShell more generally (particularly against the SharePoint 2010 API). The goal here is that devs can build and deploy through the command line; as such, so too can the build server. I have a suggestion here that still stands, but I need to blog on improvements. Most notably, don't split out tasks but rather keep them in the same file (because -docs works best). Rather than getting reuse at the task level, do it with functions (or cmdlets). Also, I am now moving away from the mix with msbuild that I blogged here and am moving those tasks into PowerShell. There is no real advantage other than fewer files and a reduced mix of techniques (and lib inclusions).
  4. Create a build server and link this build and deployment to it. I have been using TFS and TeamCity. I recommend TeamCity, but TFS will suffice. If you haven't created Build Definitions in TFS Workflow, allow days-to-weeks to learn it. In the end, but only in the end, it is simple. Be careful with TFS: the paradigm there is that the build server does tasks that devs don't. It looks like a nice approach, but I don't recommend it, and nothing here by design makes it inevitable. In TFS, you are going to need to build two build definitions: SharePointBuild.xaml and SharePointDeploy.xaml. The build is a compile, package and test. The deploy simply deploys to an environment – Dev, Test, Pre-prod and Prod. The challenge here is to work out a method for deploying into environments. In the end, I wrote a simple self-hosted windows workflow (xamlx) that did the deploying. Again, I haven't had time to blog the sample. Alternatively, you can use psexec. The key is that for a SharePoint deployment you must be running on the local box, and most configurations have a specific service account for permissions. So I run a service for deployment that runs under that service account.

Now that you can reliably and repeatably test and deploy, you are ready to write code!

Walking Skeleton

Next is to start writing code based on a layered strategy. What we have found is that we need to do two important things: (1) always keep our tests running on the build server and (2) attend to keeping the tests running quickly. This is difficult in SharePoint because a lot of code relates to integration and system tests (as defined by the test automation pyramid). We find that integration tests that require setup/teardown of a site/features get brittle and slow very quickly. In this case, reduce setup and teardown in the system tests. However, I have also had a case where an integration test showed that a redesigned object (that facaded SharePoint) would give better testability for little extra work.

  1. Create 6 more projects based on a DDD structure (Domain, Infrastructure, Application, Tests.Unit, Tests.Integration & Tests.System). Also rename your SharePoint project to UI-[Your App]; this avoids naming conflicts on a SharePoint installation. We want to create a port-and-adapters application around SharePoint. For example, we can wrap property bags with the repository pattern. This means that we create domain models (in Domain), return them with repositories (in Infrastructure), and can test with integration tests.
  2. System tests: I have used StoryQ with the team to write tests because it allows for a setup/teardown and then multiple test scenarios. I could use SpecFlow or nBehave just as easily.
  3. Integration tests: these are written classical TDD style.
  4. Unit tests: these are also written classical TDD/BDD style.
  5. Javascript tests: we write all javascript code using a jQuery plugin style (aka Object Literal) – in this case, we use JSSpec (but I would now use Jasmine) – we put all tests in Tests.Unit but the actual javascript is still in the UI-SharePoint project. You will need two sorts of tests: Examples for exploratory testing and Specs for the Jasmine specs. I haven't blogged about this and need to, but it is based on my work for writing jQuery plugins with tests.
  6. Deployment tests: these are tests that run once the application is deployed. You can go to an ATOM feed which returns the results of a series of tests that run against the current system. For example, we have a standard set which tells us the binary versions and which migrations (see below) have been applied. Others check whether a certain wsp has been deployed, whether different endpoints are listening, etc. I haven't blogged this code and mean to – it has been great for testers to see if the current system is running as expected. We also get the build server to pass/fail a build based on these results.
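To make the port-and-adapters idea in step 1 concrete, here is a minimal sketch (all names are illustrative, and an in-memory dictionary stands in for the SharePoint property bag so the shape is visible without the SharePoint API):

```csharp
using System.Collections.Generic;

namespace Sketch
{
    // Domain: a model and a port (no SharePoint references)
    public class SiteSetting
    {
        public string Key { get; set; }
        public string Value { get; set; }
    }

    public interface ISettingsRepository
    {
        SiteSetting Get(string key);
        void Save(SiteSetting setting);
    }

    // Infrastructure: in a real solution this adapter would wrap
    // SPWeb.AllProperties; a dictionary stands in for it here
    public class PropertyBagSettingsRepository : ISettingsRepository
    {
        private readonly IDictionary<string, string> propertyBag;

        public PropertyBagSettingsRepository(IDictionary<string, string> propertyBag)
        {
            this.propertyBag = propertyBag;
        }

        public SiteSetting Get(string key)
        {
            return new SiteSetting { Key = key, Value = propertyBag[key] };
        }

        public void Save(SiteSetting setting)
        {
            propertyBag[setting.Key] = setting.Value;
        }
    }
}
```

Because the port lives in Domain, unit tests can use any in-memory implementation, while the real adapter is covered by integration tests against SharePoint.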

We don’t use Pex and Moles. We use exploratory testing to ensure that something actually works on the page.
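On the deployment tests above: the results feed is just Atom, so a check can be as small as fetching the feed and scanning its entries. A sketch, assuming a feed shape where each entry's summary holds "pass" or "fail" (an invented format, not the actual one):

```csharp
using System.Linq;
using System.Xml.Linq;

namespace Sketch
{
    public static class DeploymentChecks
    {
        private static readonly XNamespace Atom = "http://www.w3.org/2005/Atom";

        // Returns true when every entry in the results feed reports "pass".
        // In practice the XML would be fetched from the deployed site's results URL.
        public static bool AllPassed(string atomXml)
        {
            var feed = XDocument.Parse(atomXml);
            return feed.Descendants(Atom + "entry")
                       .All(e => (string)e.Element(Atom + "summary") == "pass");
        }
    }
}
```

The build server can call something like this after each deploy to pass/fail the build.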

Other bits you’ll need to sort out

  • Migrations: if you have manual configurations for each environment then you’ll want to script/automate them. Otherwise, you aren’t going to have one-click deployments. Furthermore, you’ll need to assume that each environment is in a different state/version. We use migratordotnet with a SharePoint adapter that I wrote – it is here for SharePoint 2010 – there is also a powershell runner in the source to adapt – you’ll need to download the source and compile. Migrations as an approach works extremely well for feature activation and publishing.
  • Application Configuration: we use domain models for configuration and then instantiate via an infrastructure factory – certain configs require SharePoint knowledge
  • Logging: you’ll need to sort out a Service Locator because in tests you’ll swap it out for a console logger
  • WebParts: can’t be in a strongly typed binary (we found we needed another project!)
  • Extension Methods to Wrap SharePoint API: we also found that we wrapped a lot of SharePoint material with extension methods
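As a sketch of that extension-method style (the real versions extend SharePoint types such as SPWeb or SPListItem; a dictionary stands in here so the sketch is self-contained):

```csharp
using System.Collections.Generic;

namespace Sketch
{
    public static class PropertyExtensions
    {
        // On a real project this would extend a SharePoint property
        // collection; a dictionary stands in for the SharePoint type.
        // Reads a value, falling back to a default when the key is absent.
        public static string GetOrDefault(
            this IDictionary<string, string> properties,
            string key,
            string fallback = "")
        {
            string value;
            return properties.TryGetValue(key, out value) ? value : fallback;
        }
    }
}
```

Small, well-named extensions like this keep null/missing-key handling out of the calling code.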

Other advice: stay simple

For SharePoint developers not used to object-oriented programming, I would stay simple. In this case, I wouldn’t create code with abstractions whose only purpose is to let you unit test like this. I found in the end that the complexity added for testability outweighed the simplicity and maintainability it was meant to buy.

Microsoft itself has recommended the Repository Pattern to facade the SharePoint API (sorry, I can’t for the life of me find the link). This has been effective. It is so effective we have found that we can facade most SharePoint calls in two ways: a repository that returns/works with a domain concept, or a Configurator (which has the single public method Process()). Any more than that and you are really working against the grain. All cool, very possible, but not very desirable for a team which rotates people.
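A sketch of the Configurator shape (names are illustrative): collaborators come in through the constructor, and one public Process() applies the configuration – nothing else is exposed:

```csharp
using System.Collections.Generic;

namespace Sketch
{
    public interface ISettingsStore
    {
        void Set(string key, string value);
    }

    // One public method: apply a piece of configuration, and nothing else.
    public class SiteThemeConfigurator
    {
        private readonly ISettingsStore store;
        private readonly string theme;

        public SiteThemeConfigurator(ISettingsStore store, string theme)
        {
            this.store = store;
            this.theme = theme;
        }

        public void Process()
        {
            // a real configurator would call the SharePoint API here
            store.Set("theme", theme);
        }
    }

    // simple in-memory store, handy for unit testing the configurator
    public class InMemorySettingsStore : ISettingsStore
    {
        public readonly Dictionary<string, string> Values = new Dictionary<string, string>();
        public void Set(string key, string value) { Values[key] = value; }
    }
}
```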

SharePoint TDD Series: Maintainability over Ease

December 16th, 2010 No comments

This series is part of my wider initiative around the Test Automation Pyramid. Previously I have written about ASP.NET MVC. This series will outline a layered code and test strategy in SharePoint.

SharePoint is a large and powerful system. It can cause problems in the enterprise environment, incurring delays, cost and general frustration. Below is an overview of the main areas of innovation made in the source code to mitigate these problems. These problems arise because the fundamental design of SharePoint is to be an “easy” system to code. “Easy” in this sense means a system that can be configured by a general pool of developers and non-developers alike. Such a design, however, does not necessarily make the system maintainable. Extension, testability and stability may all suffer. In enterprise environments these last qualities are equally if not more important to the long-term value of software.

This series of posts outlines both the code used and the reasons behind its usage. As such it is a work in progress that will need to be referred to and updated as the code base itself changes.

Deployment Changes

Layered Code with testing

Testing on Event Receiver via declarative attributes

Testing Delegate controls which deploy jQuery

  • Part 2 – Unit testing the delegate control that houses the jQuery
  • Part 4 – Exploratory testing without automation is probably good enough
  • Part 5 – Client side strategies for javascript
  • Part 6 – Unit testing the jQuery client-side code without deploying to SharePoint

Cross-cutting concerns abstractions

Test Strategy in SharePoint: Part 2 – good layering to aid testability

November 28th, 2010 No comments

Test Strategy in SharePoint: Part 2 – good layering to aid testability

Overall goal: write maintainable code over code that is easy to write (Freeman and Price, 2010)

In Part 1 – testing poor layering is not good TDD, I argued that we need to find better ways to think about testing SharePoint wiring code that do not confuse unit and integration tests. In this post, I outline a layering strategy that resolves this problem. Rather than only one project for code and one for tests, I use 3 projects for tests and 4 for the code – this strategy is based on DDD layering and the test automation pyramid.

  • DDD layering projects: Domain, Infrastructure, Application and UI
  • Test projects: System, Integration and Unit

Note: this entry does not give code samples – the next post will – but focuses on how the projects are organised within the Visual Studio solution and how they are sequenced when programming. I’ve included a task sheet that we use in Sprint Planning as a boilerplate list to mix and match the scope of features. Finally, I have a general rave on the need for disciplined test-first and test-last development.


Here’s the quick overview of the layers, take a look further down for fuller overview.

  • Domain: the project which has representations of the application domain and have no references to other libraries (particularly SharePoint)
  • Infrastructure: this project references Domain and has the technology-specific implementations. In this case, it has all the SharePoint API implementations
  • Application: this project is a very light orchestration layer. It is a way to get logic out of the UI layer to make it testable. Currently, we actually put all our javascript jQuery widgets in this project (I will post about that later, because we unit test (BDD-style) all our javascript and thus need to keep it away from the UI)
  • UI: this is the wiring code for SharePoint but has little else – this will make more sense once you see that we integration test all SharePoint API code and it goes in Infrastructure, and that we unit test any models, services or validation and put these in Domain. For example, with Event Receivers the code in methods is rarely longer than a line or two.

Test Projects

  • System Acceptance Tests: business-focused tests that describe the system – these tests should live long-term reasonably unchanged
  • System Smoke Tests: tests that can run in any environment and confirm that it is up and running
  • Integration Tests: tests that have 1 dependency and 1 interaction, usually against a third-party API – in this case mainly the SharePoint API; these may create scenarios on each method
  • Unit Tests: tests that have no dependencies (or are mocked out) – model tests, validations, service tests, exception handling

Solution Structure

Below is the source folder of code in the source repository (ie not lib/, scripts/, tools/). The solution file (.sln) lives in the src/ folder.

Taking a look below, we see our 4 layers with 3 test projects. In this sample layout, I have included folders which suggest that we have code around the provisioning and configuration of the site for deployment – see here for a description of our installation strategy. These functional areas exist across multiple projects: they have definitions in the Domain, implementations in the Infrastructure, and both unit and integration tests.

I have also included Logging because central to any productivity gains in SharePoint is to use logging and avoid using a debugger. We now rarely attach a debugger for development. And if we do, it is not our first tactic as was previously the case.

You may also notice Migrations/ in Infrastructure. These are the migrations that we use with migratordotnet.

Finally, the UI layer should look familiar and this is a subset of folders.





Writing code in our layers in practice

The cadence of the developers’ work is also based on this separation. It generally looks like this:

  1. write acceptance tests (eg given/when/then)
  2. begin coding with tests
  3. sometimes starting with Unit tests – eg new Features, or jQuery widgets
  4. in practice, because it is SharePoint, move into integration tests to isolate the API task
  5. complete the acceptance tests
  6. write documentation of SharePoint process via screen shots

We also have a task sheet for estimation (for sprint planning) that is based around this cadence.

Task Estimation for story in Scrum around SharePoint feature

A note on test stability

Before I finish this post and start showing some code, I just want to point out that getting stable deployments and stable tests requires discipline. The key issues to allow for are the usual suspects:

  • start scripted deployment as early as possible
  • deploy with scripts as often as possible, if not all the time
  • try to never deploy or configure through the GUI
  • if you are going to require a migration (GUI-based configuration), script it early: while it is faster to do through the GUI, that is a developer-level (local) optimisation for efficiency and won’t help with stabilisation in the medium term
  • unit tests are easy to keep stable – if they aren’t then you are in serious trouble
  • integration tests are likely to be hard to keep stable – ensure that you have the correct setup/teardown lifecycle and that you can fairly assume that the system is clean
  • as per any test, make sure integration tests are not dependent on other tests (this is standard stuff)
  • system smoke tests should run immediately after an installation and should be able to be run in any environment at any time
  • system smoke tests should not be destructive, precisely because they are run in any environment, including production, to check that everything is working
  • system smoke tests shouldn’t manage setup/teardown because they are non-destructive
  • system smoke tests should be fast to run and fail
  • get all these tests running on the build server asap

Test-first and test-last development

TDD does not need to be exclusively test-first development. I want to suggest that different layers require different strategies, but most importantly there is a consistency to the strategy to help establish cadence. This cadence is going to reduce transaction costs – knowing when you are done, quality assurance for coverage, moving code out of development. Above I outlined writing code in practice: acceptance test writing, unit, integration and then acceptance test completion.

To do this, I write acceptance tests test-last. This means that as developers we write BDD-style user story (given/when/then) acceptance tests. While such a test is written first, it rarely is test-driven because we might not then actually implement the story directly (although sometimes we do). Rather, we park it. Then we move into the implementation which is encompassed by the user story, but we move into classical unit test assertion mode in unit and integration tests. Where there is a piece of code that is clearly unit testable (models, validation, services), this is completed test-first – and we pair on it, using Resharper support to code outside-in. We may also need to create data access code (ie SharePoint code), and this is created with integration tests. Interestingly, because it is SharePoint we break many rules. I don’t want devs to write Infrastructure code test-last, but often we need to spike the API. So, we actually spike the code in the integration test and then refactor into Infrastructure as quickly as possible. I think that this approach is slow and that we would be best to go test-first, but at this stage we are still getting a handle on good Infrastructure code to wrap the SharePoint API. The main point is that we don’t have untested code in Infrastructure (or Infrastructure code lurking in the UI). These integration tests, in my view, are test-last in most cases simply because we aren’t driving design from the tests.

At this stage, we have unfinished system acceptance tests, and code in the domain and infrastructure (all tested). What we then do is hook the acceptance test code up. We do this instead of hooking up the UI because then we don’t kid ourselves about whether the correct abstraction has been created. Having hooked up the acceptance tests, we can simply hook up the UI. However, the reverse has often not been the case. Nonetheless, the most important point is that we have hooked up our Domain/Infrastructure code to two clients (acceptance and UI), and this tends to prove that we have a maintainable level of abstraction for the current functionality/complexity. This approach is akin to when you have a problem and you go to multiple people to talk about it. By the time you have had multiple perspectives, you tend to get clarity about the issues. Similarly, in allowing our code to have multiple conversations, in the form of client libraries consuming it, we know the sorts of issues our code is going to have – and hopefully, because it is software, we have refactored the big ones out (ie we can live with the level of cohesion and coupling for now).

I suspect for framework or even line of business applications, and SharePoint being one of many, we should live with the test-first and test-last tension. Test-first is a deep conversation that in my view covers off so many of the issues. However, like life, these conversations are not always the best to be had every time. But for the important issues, they will always need to be had and I prefer to have them early and often.

None of this means that individual developers get to choose which parts get test-first and test-last. It requires discipline to use the same sequencing for each feature. This takes time for developers to learn and leadership to encourage (actually: enforce, review and refine). I am finding that team members can learn the rules of a particular code base in between 4-8 weeks, if that is any help.


October 24th, 2010 2 comments

I was in a pairing session and we decided that this syntax described here is way too noisy:

new Banner { Name="" }.SetupWithDefaultValuesFrom(ValidMO.Banner).IsValid().ShouldBeFalse()

So we rewrote it using anonymous objects which made it nicer to read:

a.Banner.isValid().with(new { Name="john" })

A couple of notes on this style:

  • I am using the style of lowercase for test helpers. As per above: a, isValid and with.
  • I am using nested classes to create a DSL feel: eg a._object_ replaces the idea of ValidMO._object_
  • I am wrapping multiple assertions in isValid and isInvalid (they are wrappers that call the domain object’s IsValid and provide assertions – these assertions can be overwritten for different test frameworks, which at this stage is MSTest)

Unit test object builder

This code has two tests: Valid and InvalidHasNoName. The first test checks that I have set up (and maintained) the canonical model correctly and that it behaves well with the validator. The second test demonstrates testing the invalid case. Here the rule is that the banner is invalid if it has no name.

  using Domain;
  using Domain.Model;
  using Microsoft.VisualStudio.TestTools.UnitTesting;
  using Tests.Unit;

  namespace UnitTests.Domain.Model
  {
      [TestClass]
      public class BannerPropertyValidationTest
      {
          [TestMethod]
          public void Valid()
          {
              // the canonical model should validate as-is
              a.Banner.isValid().with(new { });
          }

          [TestMethod]
          public void InvalidHasNoName()
          {
              a.Banner.isInvalid().with(new { Name = "" });
          }
      }
  }

Here’s the canonical form that gives us the object as a.Banner:

  using Domain.Model;

  namespace UnitTests.Domain
  {
      public static class a
      {
          public static Banner Banner
          {
              get { return new Banner { Name = "Saver" }; }
          }
      }
  }

At this stage, you should have a reasonable grasp of the object through the tests and what we think is exemplar scenario data. Here’s the model with validations. I won’t show it here, but there is also an extension method that runs the validator (it appears at the end of this post).

  using Castle.Components.Validator;

  namespace Domain.Model
  {
      public class Banner
      {
          // the rule under test: a banner must have a name
          // (later tests in this post also assume a minimum-length rule)
          [ValidateNonEmpty]
          public virtual string Name { get; set; }

          // used by the builder tests below
          public virtual string Description { get; set; }
      }
  }

Builder extensions: with

Now for the extensions that hold this code together. It is a simple piece of code that takes the model and an anonymous class and merges them. As a side note, I originally thought that I would use LinFu to do this, but it can’t duck type against concrete classes, only interfaces. And I don’t have interfaces on a domain model.

	namespace Domain
	{
	    public static class BuilderExtensions
	    {
	        public static T with<T>(this T model, object anon) where T : class
	        {
	            foreach (var anonProp in anon.GetType().GetProperties())
	            {
	                var modelProp = model.GetType().GetProperty(anonProp.Name);
	                if (modelProp != null)
	                    modelProp.SetValue(model, anonProp.GetValue(anon, null), null);
	            }
	            return model;
	        }
	    }
	}

So if you want to understand the different scenarios it works on, here are the tests:

	using Domain;
	using Domain.Model;
	using Microsoft.VisualStudio.TestTools.UnitTesting;

	namespace Tests.Unit
	{
	    [TestClass]
	    public class WithTest
	    {
	        [TestMethod]
	        public void UpdateBannerWithName_ChangesName()
	        {
	            var a = new Banner { Name = "john" };
	            var b = a.with(new { Name = "12345" });
	            Assert.AreEqual(b.Name, "12345");
	        }

	        [TestMethod]
	        public void UpdateBannerWithEmptyName_ChangesName()
	        {
	            var a = new Banner { Name = "john" };
	            var b = a.with(new { Name = "" });
	            Assert.AreEqual(b.Name, "");
	        }

	        [TestMethod]
	        public void UpdateBannerWithEmptyName_ChangesNameAsRef()
	        {
	            var a = new Banner { Name = "john" };
	            a.with(new { Name = "" });
	            Assert.AreEqual(a.Name, "");
	        }

	        [TestMethod]
	        public void UpdateBannerChainedWith_ChangesNameAndDescriptionAsRef()
	        {
	            var a = new Banner { Name = "john" };
	            a.with(new { Name = "" }).with(new { Description = "hi" });
	            Assert.AreEqual(a.Name, "");
	            Assert.AreEqual(a.Description, "hi");
	        }

	        [TestMethod]
	        public void UpdateBannerWithName_ChangesNameOnly()
	        {
	            var a = new Banner { Name = "john", Description = "ab" };
	            var b = a.with(new { Name = "12345" });
	            Assert.AreEqual(b.Name, "12345");
	            Assert.AreEqual(b.Description, "ab");
	        }

	        [TestMethod]
	        public void UpdateBannerWithPropertyDoesntExist_IsIgnored()
	        {
	            var a = new Banner { Name = "john" };
	            var b = a.with(new { John = "12345" });
	            Assert.AreEqual(b.Name, "john"); // nothing happens!
	        }
	    }
	}

Builder extensions: validators

Now that we have the builder in place, we want to be able to do assertions on the new values in the object. Here’s what we are looking for: a.Banner.isValid().with(new { Name = "john" }). The complexity here is that we want to write the isValid or isInvalid before the with. We felt that it read better. This adds a little complexity to the code – but not much.

The general structure below goes something like this:

  1. add an extension method for the isValid on a domain object
  2. that helper returns a ValidationContext in the form of a concrete Validator with a test assertion
  3. we need to create another with on a ValidationContext to run the Validator
  4. Finally, in the with we chain the with with the anonymous class and do the assert
	using System;
	using Domain;
	using Domain.Model;
	using Microsoft.VisualStudio.TestTools.UnitTesting;

	namespace Tests.Unit
	{
	    public static class ValidationContextExtensions
	    {
	        public static ValidationContext<T> isInvalid<T>(this T model)
	        {
	            return new Validator<T>(model, (a) => Assert.IsFalse(a));
	        }

	        public static ValidationContext<T> isValid<T>(this T model)
	        {
	            return new Validator<T>(model, (a) => Assert.IsTrue(a));
	        }

	        public static T with<T>(this ValidationContext<T> context, object exceptions) where T : class
	        {
	            // merge the anonymous class in, run the validator, then assert
	            var model = context.Test.with(exceptions);
	            context.Assert(model.IsValid());
	            return model;
	        }
	    }

	    public interface ValidationContext<T>
	    {
	        T Test { get; set; }
	        Action<bool> Assertion { get; set; }
	        void Assert(bool isValid);
	    }

	    public class Validator<T> : ValidationContext<T>
	    {
	        public Validator(T test, Action<bool> assertion)
	        {
	            Test = test;
	            Assertion = assertion;
	        }

	        public T Test { get; set; }
	        public virtual Action<bool> Assertion { get; set; }

	        public void Assert(bool value)
	        {
	            Assertion(value);
	        }
	    }
	}
Just a last comment about the interface and implementation. I have gone for naming the interface differently from the implementation (ie not IValidator) because I want to avoid the I convention and see where it takes me. In this case, the with needs to be able to chain itself to something – to me this is the validation context. This context then has a validator in it (eg valid/invalid). In this case the interface isn’t merely a contract; it is actually being used concretely itself. In fact, we could almost have the interface living without its property and method definitions at this stage, or perhaps the interface could have become an abstract class – either solution would entail less code (but slightly weird conventions).

Righto, a bit more showing of tests last:

	using Domain;
	using Domain.Model;
	using Microsoft.VisualStudio.TestTools.UnitTesting;

	namespace Tests.Unit
	{
	    [TestClass]
	    public class ValidationContextTest
	    {
	        private const string NameCorrectLength = "johnmore_than_six";
	        private const string NameTooShort = "john";

	        [TestMethod]
	        public void UpdateBannerWithInvalidExtensionOnWith()
	        {
	            var a = new Banner { Name = NameTooShort };
	            a.isInvalid().with(new { Name = "" });
	        }

	        [TestMethod]
	        public void UpdateBannerWithValidExtensionOnWith()
	        {
	            var a = new Banner { Name = NameTooShort };
	            a.isValid().with(new { Name = NameCorrectLength });
	        }

	        [TestMethod]
	        public void NewBannerIsValidWithoutUsingWith()
	        {
	            // reconstructed: without with, assert via IsValid directly
	            var a = new Banner { Name = NameCorrectLength };
	            Assert.IsTrue(a.IsValid());
	        }

	        [TestMethod]
	        public void NewBannerIsInvalidWithoutUsingWith()
	        {
	            // assumes a minimum-length validation rule on Name
	            var a = new Banner { Name = NameTooShort };
	            Assert.IsFalse(a.IsValid());
	        }
	    }
	}

Just a final bit, if you are wondering about the difference between IsValid with caps and isValid: here’s the validator-running code on the domain model that we wrap in our validators.

	using Castle.Components.Validator;

	namespace Domain.Model
	{
	    public static class ValidationExtension
	    {
	        private static readonly CachedValidationRegistry Registry = new CachedValidationRegistry();

	        public static bool IsValid<T>(this T model) where T : class
	        {
	            return new ValidatorRunner(Registry).IsValid(model);
	        }
	    }
	}

I hope that helps.


August 24th, 2010 No comments

Using web services is easy. That is, if we listen to vendors: point to a WSDL and hey presto, easy data integration. Rarely is it that easy for professional programmers. We have to be able to get through to endpoints via proxies, use credentials and construct the payloads correctly – and all this across different environments. Add the dominance of point-and-click WSDL integration, and many developers I talk to don’t really work at the code level for these types of integration points, or if they do, it is to the extent of passive code generation. So to suggest TDD on web services is, at best, perplexing. Here I am trying to explain how the test automation pyramid helps with how to TDD web services.

Test-first web services?

Can we do test-first development on web services? Or is test-first web services an oxymoron? (ok, assume each is only one word ;-) ) My simple answer is no. But the more complex answer is that you have to accept some conditions to make it worthwhile. These conditions to me are important concepts that lead me to do test-first rather than the other way around.

One condition is that my own domain concepts remain clean. To keep it clean, this means I keep the web service’s domain from being within my domain. For example, if you have a look at the samples demonstrating how to use a web service, the service proxy and its domain are right there in the application – so for a web application, you’ll see references in the code-behind. This worries me. When the web service changes and the code is auto-generated, the new code is likely then to be littered throughout that code. Another related condition is that integration points should come through my infrastructure layer because it aids testability. If I can get at the service reference directly, I am in most cases going to have an ad hoc error handling, logging and domain mapping strategy.

Another condition is that there is an impedance mismatch between our domain and the service domain. We should deal with this mismatch as early as possible and as regularly as possible. We should also deal with these issues test-first and in isolation from the rest of the code. In practice, this is a mapper concern and we have a vast array of options (eg the automapper library, LINQ). These options are likely to depend on the complexity of the mapping. For example, if we use WSE3 bindings then we will be mapping from an XML structure into an object. Here we’ll most likely do the heavy lifting with an XML parser such as System.Xml.Linq. Alternatively, if we are using the ServiceModel bindings then we will be mapping object to object. If these models follow similar conventions we might get away with automapper, and if not we are likely to roll our own. If you are rolling your own, I would suggest that the interface of automapper might be nice to follow though (eg Mapper.Map<T, T1>(data)).
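For the WSE3-style case (XML payload to domain object), the mapper is plain System.Xml.Linq code kept in one tested place. A minimal sketch, with an invented payload shape and domain model:

```csharp
using System.Xml.Linq;

namespace Sketch
{
    public class Customer
    {
        public string Name { get; set; }
        public string Email { get; set; }
    }

    public static class CustomerMapper
    {
        // Maps a service payload (element names assumed for illustration)
        // into our domain model; all the mapping "rules" live and are
        // tested here, not scattered through the application.
        public static Customer Map(XElement payload)
        {
            return new Customer
            {
                Name = (string)payload.Element("Name"),
                Email = (string)payload.Element("Email")
            };
        }
    }
}
```

Each mapping rule (missing elements, formats, defaults) gets its own unit test, which is where the 150-odd tests mentioned below come from.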

I think there is a simple rule of thumb with mapping domains: you can either deal with the complexity now or later. But, regardless, you’ll have to deal with the intricacies at some point. Test-first demands that you deal with them one at a time, and now. Alternatively, you delay, and you’ll have to deal with them at integration time – and this often means that someone else is finding them, in the worst case in production! I am always surprised at how many “rules” there are when mapping domains and also how much we can actually do before we try to integrate. I was just working with a developer who has done a lot of this type of integration work but never test-first. As we proceeded test-first, she started reflecting on how much of the mapping work that she usually did at integration time could be moved forward into development. On finishing that piece of work we were also surprised how many tests were required to do the mapping – a rough count was 150 individual tests across 20 test classes. This was for mapping two similar domains, each with 5 domain models.

What code do you really need to write?

So let’s say that you accept that you don’t want to have a direct reference to the client proxy; what else is needed? Of course, the answer is it depends. It depends on:

  • client proxy generated (eg WSE3 vs ServiceModel): when using the client proxy, WSE3 will require a little more inline work around, say, the Proxy and SetClientCredential methods, whereas with ServiceModel it can be inline or be delegated to the configuration file
  • configuration model (eg xml (app.config) vs fluent configuration (programmatic)): you may want to deal with configuration through a fluent configuration regardless of an app.config. This is useful for configuration checking and logging within environments. Personally, the more self-checking you have for configuration settings, the easier the code will be to deploy through the environments. Leaving configuration checking and setting solely to operations people is a good example of throwing code over the fence: configuration becomes someone else’s problem.
  • reference data passed with each request: most systems require some form of reference data that travels with each and every request. I prefer to avoid handling that at the per-request level, dealing with it instead when instantiating the service. This information is less likely to change than the credential information.
  • security headers: you may need to add security headers to your payload. I forget which WS-* standard this relates to, but like proxies and credentials it is a strategy that needs to be catered for. WSE3 and ServiceModel each have their own mechanisms to do this.
  • complexity of domain mappings: you will need to call the mapping concern to do this work, but it should only be a one-liner because you have designed and tested it somewhere else. It is worth noting the extent of the difference, though. With simple pass-through calls some mappings are almost a simple value – a calculation service, for example, may return a single value. With domain synching, however, the domain mappings are a complex set of rules needed to get the service to accept your data.
  • error handling strategy: we are likely to want to catch exceptions and throw our own kind so that we can catch them further up the application (eg in the UI layer). With lambdas it is straightforward and clean to wrap calls to the service client in try/catch.
  • logging strategy, particularly for debugging: you are going to need to debug payloads at some stage. Personally, I hate stepping through and it doesn’t help outside development environments. So a good set of logging is needed too. I’m still surprised how often code doesn’t have this.
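
The error handling and logging bullets above can be sketched together: one lambda-accepting helper gives a single place for try/catch and DEBUG logging. UserServiceClient, ServiceClientException and the proxy stand-in are all hypothetical names for illustration:

```csharp
using System;

// Hypothetical exception type: our own kind, caught further up (eg the UI layer).
public class ServiceClientException : Exception
{
    public ServiceClientException(string message, Exception inner) : base(message, inner) { }
}

public class UserServiceClient
{
    // Wraps any call to the generated proxy: one place for try/catch and logging.
    private T Call<T>(string operation, Func<T> call)
    {
        try
        {
            Log("DEBUG: calling " + operation);
            var result = call();
            Log("DEBUG: " + operation + " succeeded");
            return result;
        }
        catch (Exception e)
        {
            Log("DEBUG: " + operation + " failed: " + e.Message);
            throw new ServiceClientException(operation + " failed", e);
        }
    }

    private void Log(string message) { Console.WriteLine(message); }

    // Each public method is then a one-liner: delegate to the proxy, then map.
    public string GetUserName(int id)
    {
        return Call("GetUserName", () => LookupOnProxy(id));
    }

    // Stand-in for the call on the generated proxy.
    private string LookupOnProxy(int id) { return "user-" + id; }
}
```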

Test automation pyramid

Now that we know what code we need to write, what types of tests are we likely to need? If you are unfamiliar with the test automation pyramid or my specific usage see test automation pyramid review for more details.

System Tests

  • combine methods for workflow
  • timing acceptance tests

Integration Tests

  • different scenarios on each method
  • exception handling (eg bad credentials)

Unit Tests

  • Each method with mock (also see mocking webservices)
  • exception handling on each method
  • mapper tests

More classes!

Without going into implementation details, all of this means that there is a boilerplate of likely classes. Here’s what I might expect to see in one of my solutions. A note on conventions: a slash ‘/’ denotes a folder rather than a file; <Service>, <Model> and <EachMethod> are specific to your project and indicate that there is likely to be one or more of that type; names ending in .xml are xml files and all others, if not folders, are .cs files.

    Service References/                  <-- auto generated from Visual Studio ServiceModel
    Web Reference/                       <-- auto generated from Visual Studio WSE3
          CredentialTest                  <-- needed if header credential
          Request.xml                     <-- needed if WSE3
          Response.xml                    <-- needed if WSE3
          Credential<Model>ObjectMother   <-- needed if header credential

That’s a whole lot more code!

Yup, it is. But each concern and test is now separated out and you can work through them independently and systematically. Here’s my point: you can deal with these issues now, with a good test bed, so that when changes come through you have change-tolerant code and know you’ve been as thorough as you can be with what you presently know. Or you can deal with them later at integration time, when you can ill afford to be the bottleneck in a highly visible part of the process.
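
As a sketch of the separation being argued for, the anti-corruption client interface might look like the following. IPolicyServiceClient, Policy and the fake are hypothetical names; the point is that callers depend on our interface and our domain model, never on the generated proxy types:

```csharp
// Hypothetical domain model returned by the client - never the proxy's types.
public class Policy { public string Number; }

// The anti-corruption seam: callers depend on this, not on the generated proxy.
public interface IPolicyServiceClient
{
    Policy GetPolicy(string policyNumber);
}

// The real implementation delegates to the proxy and maps the result;
// a fake like this one satisfies unit tests without the service running.
public class FakePolicyServiceClient : IPolicyServiceClient
{
    public Policy GetPolicy(string policyNumber)
    {
        return new Policy { Number = policyNumber };
    }
}
```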

Potential coding order

Now that we have a boilerplate of options, I want to suggest an order. With the test automation pyramid, I suggest designing as a sketch and the domain model/services first. Then write the system test stubs, then come back through the unit and integration tests before completing/implementing the system test stubs. Here’s my rough ordered list:

  1. have a data mapping document – an Excel, Word or some form of table is excellent and often provided by BAs – you still have to do some analysis of the differences between your domain and theirs
  2. generate your Service Reference or Web Reference client proxy code – I want to see what the models and endpoints look like – I may play with them via soapUI – but usually leave that for later, if at all
  3. write my system acceptance test stubs – here I need to understand how these services fit into the application and what the end workflow is going to be. For example, I might write these as user story given/when/then scenarios. I do not try to get these implemented beyond compiling because I will come back to them at the end. I just need a touch point of the big picture.
  4. start writing unit tests for my Service Client – effectively, I am doing test-first creation of my I<Service>Client making sure that I can use each method with an injected Mock/Fake<Service>Client.
  5. unit test out my mappers – by now, I will be thinking about the request/response cycle and will need to create ObjectMothers to be translated into the service reference domain model to be posted. I might be working in the other direction too – usually a straightforward mapping, but it gets clearer once you start integration tests.
  6. integration test on each method – once I have a good set of mappers, I’ll often head out to the integration point and check out how broken my assumptions about the data mapping are. Usually, as assumptions break down I head back into the unit test to improve the mappers so that the integration tests work – this is where the most work occurs!
  7. at this point, I now need good DEBUG logging and I’ll just ensure that I am not using the step-through debugger but rather good log files at DEBUG level.
  8. write system timing tests because sometimes there is an issue that the customer needs to be aware of
  9. implement those system tests that can now be satisfied by the methods unit/integration tested thus far
  10. add exception handling unit tests and code
  11. add credential headers (if needed)
  12. back to system tests and finish off and implement the original user stories
  13. finally, sometimes, we need to create a set of record-and-replay tests for other people testing. SoapUI is good for this and we can easily save them in source for later use.
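
Step 5 above (unit testing mappers via ObjectMothers) might be sketched like this. All names are hypothetical and the assertion style is deliberately framework-free so the shape stands on its own:

```csharp
using System;

// Hypothetical service-reference model and domain model for the sketch.
public class ServiceUser { public string first_name; public string last_name; }
public class User { public string FullName; }

// ObjectMother: one well-known place to get canned data for mapper tests.
public static class ServiceUserObjectMother
{
    public static ServiceUser Typical()
    {
        return new ServiceUser { first_name = "Ada", last_name = "Lovelace" };
    }
}

public class MapperTests
{
    // The unit test drives the mapping rule out before any integration happens.
    public void MapsNamesIntoFullName()
    {
        var mapped = Map(ServiceUserObjectMother.Typical());
        if (mapped.FullName != "Ada Lovelace") throw new Exception("mapping rule broken");
    }

    // Stand-in for the mapper under test.
    private static User Map(ServiceUser source)
    {
        return new User { FullName = source.first_name + " " + source.last_name };
    }
}
```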

Some problems

Apart from not having presented any specific code, here are some problems I see:

  • duplication: your I<Service>Client and the generated proxy are probably very similar, the difference being that yours returns your domain model objects. I can’t see how to get around this, given that your I<Service>Client is an anti-corruption class.
  • namespacing/folders: I have suggested ServiceReference/<Service>/ as folder structure. This is a multi-service structure so you could ditch the <Service> folder if you only had one.
  • Fixtures.ServiceReference.<Service>.Mock/Fake<Service>Client: this implementation is up to you. If you are using ServiceModel then you have an interface to implement against. If you are using WSE3 you don’t have an interface – try extending through partial classes or wrapping with another class.
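
For that last bullet, wrapping a WSE3 proxy that has no generated interface might look like this sketch (all names hypothetical):

```csharp
// WSE3 generates a concrete proxy class with no interface; this stands in for it.
public class GeneratedWse3Proxy
{
    public virtual string Lookup(int id) { return "live-" + id; }
}

// Wrap the proxy behind our own interface so tests can substitute a fake.
public interface IServiceProxy { string Lookup(int id); }

public class Wse3ProxyWrapper : IServiceProxy
{
    private readonly GeneratedWse3Proxy proxy;
    public Wse3ProxyWrapper(GeneratedWse3Proxy proxy) { this.proxy = proxy; }
    public string Lookup(int id) { return proxy.Lookup(id); }
}

// The fake used in unit tests: no service, no network.
public class FakeServiceProxy : IServiceProxy
{
    public string Lookup(int id) { return "fake-" + id; }
}
```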

Test Automation Pyramid – review

August 4th, 2010 No comments

Test Automation Pyramid

This review has turned out to be a little longer than I expected. I have been using my own layering of the test automation pyramid and the goal was to come back and check it against the current models. I think that my usage is still appropriate, but because it is quite specific it wouldn’t work for all teams, so I might pick and choose between the others when needed. If you write line-of-business web applications which tend to have integration aspects then I think this might be useful. The key difference from the other models is that my middle layer is perhaps a little more specific, as I try to recapture the idea of integration tests within the context of xUnit tests. Read on if you are interested – there are some bullet points at the end if you are familiar with the different models. Just a disclaimer that my reference point is .Net libraries and line-of-business web applications because of my work life at the moment. I am suggesting something slightly different from, say, this approach, but it doesn’t seem that different from this recent one, and you can also find it implicitly in Continuous Delivery by Farley and Humble.

A little history

The test automation pyramid has been credited to Mike Cohn of Mountain Goat Software. It is a rubric concerning three layers that is foremost concerned with how we think about going about testing – what types of tests we run and how many of each. While various authors have modified the labels, it has changed very little, and I suspect that is because its simplicity captures people’s imagination. In many ways, it makes an elementary point: automating your manual testing approach is not good enough, particularly if that testing was primarily through the GUI. Mike explains that this was the starting point for the pyramid; he says,

The first time I drew it was for a team I wanted to educate about UI testing. They were trying to do all testing through the UI and I wanted to show them how they could avoid that. Perhaps since that was my first use it, that’s the use I’ve stuck with most commonly.

Instead, change the nature of the tests – there is a lot of common sense to this. Take for instance a current web application I am working on. I have a test team that spends a lot of time testing the application through the browser UI. It is a complex insurance application that connects to the company’s main system. Testers are constantly looking at policies and risks and checking data and calculations. In using the UI, however, they have tacit knowledge of what they are looking at and why – they will have knowledge about the policy from the original system and the external calculation engine. It is this tacit knowledge and the knowledge about the context – in this case, a specific record (policy) that meets a criterion – that is difficult to automate in its raw form. Yet each time they do a manual test in this way, the company immediately loses its investment in this knowledge by allowing it to live in the head of only one person. Automating this knowledge is difficult, however. All this knowledge is wrapped into specific examples found manually. The problem when you automate this knowledge is that it is example-first testing, and when you are testing against data like this, today’s passing test may fail tomorrow. Or, even worse, the test may fail the day after, when you can’t remember much about the example! Therefore, the test automation pyramid is a mechanism to get away from a reliance on brittle end-to-end testing and the desire to simply automate your manual testing. It moves towards layering your testing – making multiple types of tests, maintaining boundaries, testing relationships across those boundaries and being attentive to the data which flows.

Here are the three main test automation pyramids out there in the wild. Take a look at these and then I will compare them in a table below and put them alongside my nomenclature.







Test Automation Pyramid: a comparison

   Cohn       Meszaros       Crispin            Suggested
                            (manual)          (exploratory)
     UI**      System          GUI               System
   Service    Component    Acceptance (api)    Integration
    Unit        Unit        Unit/Component         Unit  

** in a recent blog post and subsequent comments Mike agrees with others that the UI layer may be better called the “End-to-End” tests. Mike points out that when he started teaching this 6-7 years ago the term UI was the best way to explain the problem to people – that problem being that automating manual tests meant automating GUI tests, and that the result is (see Continuous Testing: Building Quality into Your Projects):

Brittle. A small change in the user interface can break many tests. When this is repeated many times over the course of a project, teams simply give up and stop correcting tests every time the user interface changes.
Expensive to write. A quick capture-and-playback approach to recording user interface tests can work, but tests recorded this way are usually the most brittle. Writing a good user interface test that will remain useful and valid takes time.
Time consuming. Tests run through the user interface often take a long time to run. I’ve seen numerous teams with impressive suites of automated user interface tests that take so long to run they cannot be run every night, much less multiple times per day.

Related models to test automation pyramid

Freeman & Pryce: while not a pyramid explicitly, they build a hierarchy of tests that corresponds to some nested feedback loops:

Acceptance: Does the whole system work?
Integration: Does our code work against the code we can’t change?
Unit: Do our objects do the right thing, are they convenient to work with?

Stephens & Rosenberg (Design Driven Testing: Test Smarter, Not Harder) have a four-level approach in which the tests across the layers increase in granularity and size as you move down to unit tests – and business requirement tests are seen as manual. Their process of development is closely aligned to the V model of development (see pp.6-8):

four principal test artifacts: unit tests, controller tests, scenario tests, and business requirement tests. As you can see, unit tests are fundamentally rooted in the design/solution/implementation space. They’re written and “owned” by coders. Above these, controller tests are sandwiched between the analysis and design spaces, and help to provide a bridge between the two. Scenario tests belong in the analysis space, and are manual test specs containing step-by-step instructions for the testers to follow, that expand out all sunny-day/rainy-day permutations of a use case. Once you’re comfortable with the process and the organization is more amenable to the idea, we also highly recommend basing “end-to-end” integration tests on the scenario test specs. Finally, business requirement tests are almost always manual test specs; they facilitate the “human sanity check” before a new version of the product is signed off for release into the wild.

The pyramid gets its shape because:

  • Large numbers of very small unit tests – set a foundation on simple tests
  • Smaller number of functional tests for major components
  • Even fewer tests for the entire application & workflow

With a cloud above because:

  • there are always some tests that need not or should not be automated
  • you could just say that system testing requires manual and automated testing

General rules:

  • need multiple types of tests
  • tests should be complementary
  • while there looks like overlap, different layers test at a different abstraction
  • it is harder to introduce a layered strategy

So, as Meszaros points out, multiple types of tests aware of boundaries are important:

Meszaros - Why We Need Multiple Kinds of Tests

Compare and Contrast

Nobody disagrees that the foundation is unit tests. Unit tests tell us exactly which class/method is broken. xUnit testing is dominant here as an approach and is well documented, regardless of classical TDD, mockist TDD or fluent-language style BDD (eg should syntax helpers, given/when/then or given/it/should). Regardless of the specifics, each test tests one thing, is written by the developer, is likely to have little or no setup/teardown overhead, should run and fail fast, and is the closest and most fine-grained view of the code under test.

There are some slight differences that may be more than nomenclature. Crispin includes component testing in the base layer whereas Meszaros puts it in the next layer (which he defines as functional tests of major components). I’m not sure that in practice they would look different. I suspect Crispin’s component testing would require mocking strategies to inject dependencies and isolate boundaries. The unit/component pairing in Crispin therefore suggests to me that the unit test can be on something larger than a “unit” as long as it still meets the boundary condition of being isolated. One question I would have for Crispin is whether the unit tests would include, for instance, tests with a database connection. Does this count as a component or an api?

The second layer starts to see some nuances but tends toward telling us which components are at fault. These are tested by explicitly targeting the programmatic boundaries of components. Cohn called this layer services while Crispin calls it the API. Combine that with Meszaros’ component layer and all are suggesting that there is a size of form within the system – a component, a service – bigger than the unit but smaller than the entire system. For Meszaros, component tests would also test complex business logic directly. For Cohn, the service layer is the api (or logical layer) between the user interface and the very detailed code. He says, “service-level testing is about testing the services of an application separately from its user interface. So instead of running a dozen or so multiplication test cases through the calculator’s user interface, we instead perform those tests at the service level”. So what I think we see in the second layer is expedient testing in comparison to the UI. What I find hard to see, though, is whether or not this level expects dependencies to be running within an environment. For example, Meszaros and Crispin both suggest that we might be running FIT, which often requires a working environment in place (because of the way the tests are written). In practice, I find this a source of confusion that is worth considering. (This is what I return to in a clarification of integration testing.)

On to the top layer. This has the least tests and works across the entire application, often focusing on workflow. Cohn and Crispin focus on the UI and try to keep these tests few and small. Meszaros makes a more interesting move and subsumes UI testing into system testing. Inside system tests he also includes acceptance testing (eg FIT) and manual tests. Crispin also accounts for manual testing in the cloud above the pyramid. Either way, manual testing is still an important part of a test automation strategy. For example, record-and-replay tools such as selenium or WATIR/N have made browser-based UI testing easier. You can use these tools to help script browser testing and then choose whether or not to automate. However, UI testing still suffers from entropy and hence is the hardest to maintain over time – particularly if the tests are dependent on data.

Here are some of the issues I find coming up out of these models.

  • the test automation pyramid requires a known solution architecture with good boundary definition that many teams haven’t made explicit (particularly through sketching/diagrams)
  • existing notions of the “unit” test are not as subtle as an “xUnit” unit test
  • people are too willing to accept that some boundaries aren’t stub-able – often pushing unit tests into the service/acceptance layer
  • developers are all too willing to see (system) FIT tests as the testers (someone else’s) responsibility
  • it is very hard to get story-test driven development going and the expense of FIT quickly outweighs benefits
  • we can use xUnit type tests for system layer tests (eg xUnit-based StoryQ or Cucumber)
  • you can combine different test runners for the different layers (eg xUnit, Fitnesse and Selenium and getting a good report)
  • developers new to TDD and unit testing tend to use test-last strategies that are of the style best suited for system layer tests – and get confused why they see little benefit

Modified definition of the layers

Having worked with these models, I want to propose a slight variation. This variation is in line with Freeman and Pryce.

  • Unit – a test that has no dependencies (do our objects do the right thing, are they convenient to work with?)
  • Integration – a test that has only one dependency and tests one interaction (usually, does our code work against code that we can’t change?)
  • System – a test that has one-or-more dependencies and many interactions (does the whole system work?)
  • Exploratory – test that is not part of the regression suite, nor should have traceability requirements

Unit tests

There should be little disagreement about what makes a good xUnit test: it is well documented, although rarely practised in line-of-business applications. As Michael Feathers says,

A test is not a unit test if:
* it talks to the database
* it communicates across the network
* it touches the file system
* it can’t run at the same time as any of your other unit tests
* you have to do special things to your environment (such as editing config files) to run it
Tests that do these things aren’t bad. Often they are worth writing, and they can be written in a unit test harness. However, it is important to keep them separate from true unit tests so that we can run the unit tests quickly whenever we make changes.

Yet unit tests are often difficult because they work against the grain of the application framework. I have documented my approach elsewhere: for an MVC application I not only unit test the usual suspects of domain models, services and various helpers such as configurators or data mappers, but also my touch points with the framework such as routes, controllers and action redirects (see MvcContrib, which I contributed to with the help of a colleague). I would expect unit tests of this kind to be test-first because it is about design with a touch of verification. If you can’t test-first this type of code base it might be worthwhile spending a little time understanding which part of the system you are unable to understand (sorry, mock out!). Interestingly, I have found that I can test-first BDD-style parts of my GUI in the browser using jQuery and JsSpec (or QUnit) (here’s an example) – you have to treat javascript as a first-class citizen at this point, so here’s a helper for generating the scaffolding for the jquery plugin.

So here is a checklist of unit tests I expect to find in a line-of-business unit test project:

  • model validations
  • repository validation checking
  • routes
  • action redirects
  • mappers
  • any helpers
  • services (in the DDD sense)
  • service references (web services for non-Visual Studio people)
  • jquery

Integration tests

Integration tests may have a bad name if you agree with JB Rainsberger that integration tests are a scam, because then you would see integration tests as exhaustive testing. I agree with him on that count, so I have attempted to reclaim integration testing and use the term quite specifically in the test automation pyramid. I use it to mean testing only one dependency and one interaction at a time. For example, I find this approach helpful with testing repositories (in the DDD sense). I do integration test repositories because I am interested in my ability to manage an object lifecycle through the database as the integration point. I therefore need tests that prove CRUD-type functions through an identity – Eric Evans (DDD, p.123) has a diagram of an object’s lifecycle that is very useful for showing the links between an object lifecycle and the repository.


Using this diagram, we are interested in identity upon saving, retrieval, archiving and deletion because these touch the database. For integration tests, we are likely to be less interested in creation and updates because they don’t tend to need the dependency of the database. These should be pushed down to the unit tests – such as validation checking on update.
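
A repository lifecycle test of this kind might be shaped like the following sketch. Here an in-memory dictionary stands in for the database so the shape is visible; in the real integration test the repository would sit over the actual database, and the point of the test is managing identity through save, retrieve and delete:

```csharp
using System;
using System.Collections.Generic;

public class Customer { public Guid Id; public string Name; }

// Hypothetical repository; the real one would be the single database dependency.
public class CustomerRepository
{
    private readonly Dictionary<Guid, Customer> store = new Dictionary<Guid, Customer>();

    public Guid Save(Customer customer)
    {
        customer.Id = Guid.NewGuid();   // identity assigned on save
        store[customer.Id] = customer;
        return customer.Id;
    }

    public Customer Get(Guid id) { return store[id]; }
    public void Delete(Guid id) { store.Remove(id); }
}

// The integration test follows the lifecycle: save, retrieve by identity, delete.
public class CustomerRepositoryLifecycleTest
{
    public void SavedCustomerCanBeRetrievedByIdentityAndDeleted()
    {
        var repository = new CustomerRepository();
        var id = repository.Save(new Customer { Name = "Ada" });
        if (repository.Get(id).Name != "Ada") throw new Exception("retrieval by identity failed");
        repository.Delete(id);
    }
}
```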

On the web services front, I am surprised how often I don’t have integration tests for them, because the main job of my code is (a) testing mappings between the web service object model and my domain model, which is a unit concern, or (b) the web service’s use of data, which I would tend to make a system concern.

My integration test checklist is somewhat shorter than the unit test one:

  • repositories
  • service references (very lightly done here just trying to think about how identity is managed)
  • document creation (eg PDF)
  • emails

System tests

System tests are the end-to-end tests, covering any number of dependencies to satisfy the test across scenarios. We are often asking various questions in system testing: is it still working (smoke)? do we have problems when we combine things and they interact (acceptance)? I might even have a set of tests that are scripts for manual tests.

Freeman and Pryce (p.10) write:

There’s been a lot of discussion in the TDD world over the terminology for what we’re calling acceptance tests: “functional tests”, “system tests.” Worse, our definitions are often not the same as those used by professional software testers. The important thing is to be clear about our intentions. We use “acceptance tests” to help us, with the domain experts, understand and agree on what we are going to build next. We also use them to make sure that we haven’t broken any existing features as we continue developing. Our preferred implementation of the “role” of acceptance testing is to write end-to-end tests which, as we just noted, should be as end-to-end as possible; our bias often leads us to use these interchangeably although, in some cases, acceptance tests might not be end-to-end.

I find smoke tests invaluable. To keep them cost effective, I keep them slim and often throw some away once the system has stabilised and the cost of maintaining them is greater than the benefit. Because I tend to write web applications, my smoke tests run through the UI and I use record-and-replay tools such as selenium. My goal with these tests is to target parts of the application that are known to be problematic so that I get as early a warning as possible. These types of system tests must be run often – and often here means starting in the development environment and then moving through the build and test environments. But running in all these environments comes at a significant cost, as each tends to need its own configuration of tools. Let me explain, because the options get complicated but may serve as a lesson for tool choice. In one project we used selenium: selenium IDE in Firefox for record and replay, which developers used for their development/testing (devs have visibility of these tests in the smoke test folder). These same scripts were deployed onto the website so that testing in the test environments could be done through the browser (this uses selenium core, with the tests located in the web\tests folder – the same tests, but available to different runners). On the build server, the tests ran through seleniumHQ (because we ran the Hudson build server) – although in earlier phases we had used selenium-RC with NUnit, which I found hard to maintain in practice [we could alternatively have used Fitnesse and selenese!]. As an aside, we found that we also used the smoke tests as our demo back to the client, so we had them deployed on the test web server to run through the browser (using selenium core). To avoid maintenance costs, the trick was to write them specific enough to test something real and general enough not to break over time. Or, if they do break, it shouldn’t take too long to diagnose and remedy.
As an aside, selenium tests, when broken into smaller units, use a lot of block-copy inheritance (ie copy and paste). I have found that this is just fine as long as you use find-and-replace strategies for updating data. For example, we returned to this set of tests two years later and they broke because of date-specific data that had expired. After 5 minutes I had worked out that a find-and-replace on a date would fix the tests. I was surprised, to tell the truth!

Then there are the acceptance tests. These are the hardest to sustain. I see them as having two main functions: they link customer abstractions of workflow with code (validation), and they ensure good design of the code base. A couple of widely used approaches to acceptance-type tests are FIT/Fitnesse/Slim/FitLibraryWeb (or StoryTeller) and BDD-style user stories (eg StoryQ, Cucumber). Common to both is creating an additional layer of abstraction for the customer view of the system and another layer that wires this up to the system under test. Both are well documented on the web and in books so I won’t labour the details here (just a wee note that you can see a .net bias simply because of my work environment rather than personal preference). I wrote about acceptance tests and the need for a fluent interface level in an earlier entry:

I find that to run maintainable acceptance tests you need to create yet another abstraction. Rarely can you just hook up the SUT api and it works. You need setup/teardown data and various helper methods. To do this, I explicitly create “profiles” in code for the setup of data and exercising of the system. For example, when I wrote a banner delivery tool for a client (think OpenX or GoogleAds) I needed to create a “Configurator” and an “Actionator” profile. The Configurator was able to create a number of banner ads in the system (eg an html banner on this site, a text banner on that site) and the Actionator then invoked 10,000 users on this page on that site. In both cases, I wrote C# code to do the job (think an internal DSL as a fluent interface) rather than, say, doing it in fitnesse.

I have written a series of blogs on building configurators through fluent interfaces here.
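
A “Configurator” profile as an internal DSL might be sketched as follows. BannerConfigurator and its methods are hypothetical names for illustration, not the client code from the project described above:

```csharp
using System;
using System.Collections.Generic;

// A hypothetical "Configurator" profile: a fluent internal DSL for test setup.
public class BannerConfigurator
{
    private readonly List<string> banners = new List<string>();

    public BannerConfigurator HtmlBanner(string site)
    {
        banners.Add("html:" + site);
        return this; // returning this is what gives the fluent chain
    }

    public BannerConfigurator TextBanner(string site)
    {
        banners.Add("text:" + site);
        return this;
    }

    public IList<string> Setup()
    {
        // In a real profile this would push the banners into the system under test.
        return banners;
    }
}
```

Usage then reads close to the customer’s language: `new BannerConfigurator().HtmlBanner("siteA").TextBanner("siteB").Setup();`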

System tests may or may not have control over setup and teardown. If you do, all the better. If you don’t, you’ll have to work a lot harder in the setup phase. I am currently working on a project in which a system test works across three systems. The first is the main (legacy – no tests) system, the second a synching/transforming system, and the third the target system. (Interestingly, the third only exists because no one will tackle extending the first, and the second only because the third needs data from the first – it’s not that simple, but you get the gist.) The system tests in this case become less about testing that the code is correct. Rather, the question is: does the code interact with the data from the first system in ways that we would expect/accept? As a friend pointed out, this becomes an issue akin to process control. Because we can’t set up data in the xUnit fashion, we need to query the first system directly and cross-reference against the third system. In practice, the acceptance test helps us refine our understanding of the data coming across – we find the implicit business rules and, in fact, data inconsistencies or straight-out errors.

Finally, manual tests. When using Selenium, my smoke tests are my manual tests. But in the case of web services or content gateways, there are times when I want to be able to make one-off (but repeatable) tests that require tweaking each time. I may then be using these to inspect results too. These make little sense to invest in automating, but they do make sense to check into source, leaving them there for future reference. Some SoapUI tests would fit this category.

A final point I want to make about the nature of system tests. They are both written first and last. Like story-test driven development, I want acceptance tests (story tests) written into the code base to begin with and then set to pending (rather than failing). I then want them to be hooked up to the code as the final phase and set to passing (eg Fitnesse fixtures). By doing them last, we already have our unit and integration tests in place. In all likelihood, by the time we get to the acceptance tests we are dealing with issues of design validation in two senses. One, that we don’t understand the essential complexity of the problem space – there are things in there that we didn’t understand or know about that are now coming to light, and this suggests that we will be rewriting parts of the code. Two, that the design of the code isn’t quite as SOLID as we thought. For example, I was just working on an acceptance test where I had to do a search for some data – the search mechanism I found had constraints living in the SQL code that I couldn’t pass in, and I had to have other data in the database.

For me, tests are the ability for us as coders to have conversations with our code. By the time we have had the three conversations of unit, integration and acceptance, I find that three is generally enough to have a sense that our design is good enough. The acceptance conversation is important because it helps good layering and enforces that crucial logic doesn’t live in the UI layer but rather down in the application layer (in the DDD sense). The acceptance tests, like the UI layer, are both clients of the application layer. I often find that the interface created in the application layer, not surprisingly, services the UI layer and will require refactoring for a second client. Very rarely is this unhelpful and often it finds that next depth of bugs.
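The point about the acceptance tests and the UI both being clients of the application layer can be pictured with a toy sketch (JavaScript for illustration; all the names below are made up):

```javascript
// Toy sketch: one application-layer interface, two clients.
// All names here are illustrative.
const applicationLayer = {
  registerUser: function (name) {
    // crucial logic lives down here, not in the UI layer
    if (!name) {
      throw new Error('name required');
    }
    return { id: 1, name: name };
  }
};

// Client 1: the UI layer delegates straight down.
function uiSubmitHandler(formValue) {
  return applicationLayer.registerUser(formValue);
}

// Client 2: an acceptance test exercises the same interface directly,
// which is what forces the interface to serve more than just the UI.
const fromUi = uiSubmitHandler('Bob');
const fromTest = applicationLayer.registerUser('Alice');

console.log(fromUi.name);   // Bob
console.log(fromTest.name); // Alice
```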

My system tests checklist:

  • acceptance
  • smoke
  • manual


The top level system and exploratory testing is all about putting it together so that people across different roles can play with the application and use complex heuristics to check its coding. But I don’t think the top level is really about the user interface per se. It only looks that way because the GUI is the most generalised abstraction through which customers and testers believe they understand the workings of the software. Working software and the GUI should not be conflated. Complexity at the top-most level is that of many dependencies interacting with each other – context is everything. Complexity here is not linear. We need automated system testing to follow critical paths that create combinations or interactions that we can prove do not have negative side effects. We also need exploratory testing which is deep, calculative yet ad hoc and attempts to create negative side effects that we can then automate. Neither strategy aspires to elusive, exhaustive testing – which, as JB Rainsberger argues, is the scam of integration testing.

General ideas

Under this layering, I find a (dotnet) solution structure of three test projects (.csproj) is easier to maintain and, importantly, the separation helps with the build server. Unit tests are built easily, run in place and fail fast (seconds not minutes). The integration tests require more work because you need the scripting to migrate up the database and seed data. These obviously take longer. I find the length of these tests and their stability are a good health indicator. If they don’t exist or are mixed up with unit tests – I worry a lot. For example, I had a project where these were mixed up and the tests took 45 minutes to run, so tests were rarely run as a whole (so I found out). As it turned out many of the tests were unstable. After splitting them, we got unit tests to under 30 seconds and then started the long process of reducing integration test time, which we got down to a couple of minutes on our development machines. Doing this required splitting out another group of tests that were mixed into the integration tests. These were in fact system tests that tested the whole of the system – that is, workflow tests that require all of the system to be in place: databases, data, web services, etc. These are the hardest to set up and maintain in a build environment, so we tend to do the least possible. For example, we might have Selenium tests to smoke test the application through the GUI, but it is usually the happy path, as fast as possible, that checks that all the ducks are lined up.

Still some problems

Calling the middle layer integration testing is still problematic and can cause confusion. I introduced “integration” because it is about the longer-running tests which require the dependency of an integration point. When explaining this to people, they quickly picked up on the idea. Of course, they also bring their own understanding of integration testing, which is closer to the point that Rainsberger is making.

Some general points

  • a unit test should be test first
  • an integration test with a database is usually a test of the repository managing identity through the lifecycle of a domain object
  • a system test must cleanly deal with duplication (use other libraries)
  • a system test assertion is likely to be test-last
  • unit, integration and system tests should each have their own project
  • test project referencing should cascade down from system to integration through to the standalone unit
  • the number and complexity of tests across the projects should reflect the shape of the pyramid
  • the complexity of the test should be inversely proportional to the shape of the pyramid
  • unit and integration test object mothers are subtly different because of identity
  • a system test is best for testing legacy data because it requires all components in place
  • some classes/concerns might have tests split across test projects, eg a repository class should have both unit and integration tests
  • framework wiring code should be pushed down to unit tests, such as, in MVC, routes, actions and redirects
  • the GUI can sometimes be unit tested, eg jQuery makes unit testing of browser widgets realistic
  • success of the pyramid is an increase in exploratory testing (not an increase in manual regressions)
  • the development team skill level and commitment is the greatest barrier closely followed by continuous integration

Some smells

  • time taken to run tests is not inversely proportional to the shape of the pyramid
  • folder structure in the unit tests that in no way represents the layering of the application
  • only one test project in a solution
  • acceptance tests written without an abstraction layer/library


  1. The Forgotten Layer of the Test Automation Pyramid
  2. From backlog to concept
  3. Agile testing overview
  4. Standard implementation of test automation pyramid
  5. Freeman and Price, 2010, Growing Object-Oriented Software, Guided by Tests
  6. Stephens & Rosenberg, 2010, Design Driven Testing: Test Smarter, Not Harder


August 1st, 2010 No comments

Notes from a jQuery session

Structure of session:

  • Jump into an example
  • Build some new functionality
  • Come back and see the major concepts
  • Look at what is actually needed to treat javascript as a first-class citizen
  • Different testing libraries

Quick why a library in javascript?

  • cross-browser abstraction (dynduo circa 1998 was still hard!)
  • jQuery, MooTools, Extjs, Prototype, GWT, YUI, Dojo, …
  • I want to work with the DOM with some UI abstractions – with a little general-purpose utility
  • simple HTML traversing, event handling
  • also functional, inline style
  • I want plugin type architecture

Story plugin demo

  • Viewer of StoryQ results
  • StoryQ produces XML, this widget gives the XML a pretty viewer

A screenshot and demo

What would your acceptance criteria be? What do you think some of the behaviours of this page are?


eg should display the PROJECT at the top with the number of tests


think in terms of themes: data, display, events

The tests … what does the application do?

  • run the tests and see the categories
  • data loading: xml
  • display: traversing xml and creating html
  • events: click handlers

Let’s build some new functionality!

Goal: add “Expand All | Contract All | Toggle” functionality to the page


  • The user should be able to expand, collapse or toggle the tree


* should show “Expand All | Contract All | Toggle”
* should show all results when clicking expand all
* should show only top class when clicked contract all
* should toggle between all and one when clicking on toggle

Coding: Add acceptance

Add Display specs

Add Event specs

Return back to completing the Acceptance

Major aspects we covered

HTML traversing

  • I want to program akin to how I look at the page
  • I may look for: an element, a style, some content or a relationship
  • then perform an action
$('div > p:first')
$('<li/>').text('another item').appendTo('#mylist')

    Event handling

    • I want to look at the page and add an event at that point
    • I want to load data (ie xml or json)
    $('div').bind('drag', function(){
      alert('div clicked')
    })

    $.get('result.xml', function(xml){
      $('user', xml).each(function(){
        $('<li/>').text($(this).text()).appendTo('#mylist')
      })
    })

    Functional style

    • almost everything in jQuery is a jQuery object
    • almost every method returns a jQuery object
    • so every method can be called on the result of another
    • that means you can chain
    • plus I want it to be short code

    .addClass((idx == 4) ? 'scenario' : '')
    .text($(this).attr('Prefix') + ' ' + $(this).attr('Text'))
    .append($('<span/>').text('a child piece of text')
      .click(function(){ $(this).addClass('click') }))
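The chaining works because nearly every jQuery method returns the jQuery object it was called on. A minimal plain-JavaScript sketch of that pattern (nothing here is jQuery itself):

```javascript
// Minimal illustration of the fluent/chaining style: each method returns `this`.
function Wrapped(content) {
  this.classes = [];
  this.content = content;
}
Wrapped.prototype.addClass = function (name) {
  if (name) {
    this.classes.push(name); // a falsy class, eg (cond ? 'scenario' : ''), is ignored
  }
  return this;
};
Wrapped.prototype.text = function (value) {
  this.content = value;
  return this;
};

const el = new Wrapped('')
  .addClass('scenario')
  .text('a child piece of text')
  .addClass('');

console.log(el.classes); // [ 'scenario' ]
console.log(el.content); // a child piece of text
```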

    Plugin architecture

    • drop in a widget (including my own)
    • then combine, extend
    • help understand customisation
    • basically just work

    $.ajax({
      url: '/update',
      data: name,
      type: 'put',
      success: function(xml){
        $('#flash').text('successful update').addClass('success')
      }
    })

    With power and simplicity … comes responsibility

    • the need to follow conventions
      – plugins return an array
      – plugins accept parameters but have clear defaults
      – respect namespace
    • the need for structure
      – test data
      – min & pack
      – releases
    • the need to avoid mundane, time consuming tasks
      – downloading jquery latest
      – download jQuery UI
      – building and releasing packages
    • needs tests
      – I use jsspec
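The “clear defaults” convention above is usually implemented by merging the caller’s settings over the plugin’s defaults – that is what $.extend({}, defaults, settings) does. Plain JavaScript’s Object.assign behaves the same way for flat objects (the option names below are made up):

```javascript
// Merging user settings over plugin defaults, the way
// $.extend({}, defaults, settings) does inside a jQuery plugin.
// Option names are illustrative.
const defaults = { speed: 400, theme: 'light' };

function configure(settings) {
  // later sources win, so caller values override and defaults fill the gaps
  return Object.assign({}, defaults, settings);
}

const options = configure({ theme: 'dark' });

console.log(options.theme); // dark  (from the caller)
console.log(options.speed); // 400   (from the defaults)
```

Merging into a fresh `{}` also keeps the shared defaults object untouched between calls.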

    Sounds like real software development?

    Treat javascript as a first-class citizen

    Give your plugin a directory structure:


    Generate your plugin boilerplate code

    jQuery Plugin Generator
    * gem install jquery-plugin-generator

    (function($) {
      $.jquery = $.jquery || {};
      $.jquery.test = {
        VERSION: "0.0.1",
        defaults: {
          key: 'value'
        }
      };

      $.fn.test = function(settings) {
        settings = $.extend({}, $.jquery.test.defaults, settings);
        return this.each(function(){
          var self = this;
          // your plugin code here
        });
      };
    })(jQuery);

    Use a build tool to do the … ah … building

    rake acceptance # Run acceptance test in browser
    rake bundles:tm # Install TextMate bundles from SVN for jQuery and…
    rake clean # Remove any temporary products.
    rake clobber # Remove any generated file.
    rake clobber_compile # Remove compile products
    rake clobber_package # Remove package products
    rake compile # Build all the packages
    rake example # Show example
    rake first_time # First time run to demonstrate that pages are wor…
    rake jquery:add # Add latest jquery core, ui and themes to lib
    rake jquery:add_core # Add latest jQuery to library
    rake jquery:add_themes # Add all themes to libary
    rake jquery:add_ui # Add latest jQueryUI (without theme) to library
    rake jquery:add_version # Add specific version of jQuery library: see with…
    rake jquery:packages # List all packages for core and ui
    rake jquery:packages_core # List versions of released packages
    rake jquery:packages_ui # List versions of released packages
    rake jquery:versions # List all versions for core and ui
    rake jquery:versions_core # List jQuery packages available
    rake jquery:versions_ui # List jQuery UI packages available
    rake merge # Merge js files into one
    rake pack # Compress js files to min
    rake package # Build all the packages
    rake recompile # Force a rebuild of the package files
    rake repackage # Force a rebuild of the package files
    rake show # Show all browser examples and tests
    rake specs # Run spec tests in browser

    Testing … acceptance and specs

    • specs: BDD style – DOM traversing and events
    • acceptance: tasks on the GUI with the packaged version (minified or packed)
    • these both serve as good documentation as well
    • plus, you have demo page baked in!

    Compiling and packaging

    • Compiling javascript … hhh? … yes, if you minify or pack your code
    • gzip compression with caching and header control is probably easier though
    • packed code is HARD to debug if it doesn’t work

    Now you are ready for some development

    Different ways to test Javascript

    Understanding JavaScript Testing

    • testing for cross-browser issues – so this is useful if you are building javascript frameworks
    • Unit – QUnit, JsUnit, FireUnit,
    • Behaviour – Screw.Unit, JSSpec, YUITest,
    • Functional with browser launching – Selenium (IDE & RC & HQ), Watir/n, JSTestDriver, WebDriver
    • Server-side: Crosscheck, env.js, blueridge
    • Distributed: Selenium Grid, TestSwarm


    jQuery CheatSheet

    • slide 48: great explanation of DOM manipulation – append, prepend, after, before, wrap, replace …
    Fluent Controller MvcContrib – Part I – Designing Controller Actions and Redirects

    March 7th, 2010 No comments

    In an earlier post (/03/test-automation-pyramid-asp-net-mvc/) I said that I unit test my controllers. What I didn’t say was that I (with most of the work done by my colleague Mark and field tested by Gurpreet) had to write code to make this possible. To unit test our controllers we put in a layer of code that was an abstraction for “redirections”. This has turned out to be very successful for the line of business applications we write. Mark coined this a fluent controller. Our fluent controller has these benefits:

    • test-first design of controller redirections
    • also isolate that the controller makes the correct repository/service calls
    • lower the barrier to entry for developers new to MVC
    • avoids a fat controller antipattern
    • standardises flow within actions

    I also tend toward a REST design in the application so we also wanted it to live on top of the SimplyRestful contrib. You’ll find there’s an abstract class both with and without SimplyRestful. I will make a quick, unsubstantiated comment. The fluent controller does not actually help you create a controller for a REST application – it is Restful but really not a full implementation of REST.

    Designing Controller Actions

    In a SimplyRestful design, the flow is standardised per resource. We have our 7 actions and we decide which of them are going to be implemented.

    Fluent Controller Action Redirects/Renders

    How to write it test first?

    In words, I have a User Controller that displays a user and I can do the usual CRUD functions. These examples are taken from the MvcContrib source unit tests

    In this test, I enter on Index and it should just render itself; I don’t need to pass anything in:

    using MvcContrib.TestHelper.FluentController;

    [Test]
    public void SuccessfulIndex()
    {
        GivenController.As<UserController>()
            .ShouldRenderItself(RestfulAction.Index)
            .WhenCalling(x => x.Index());
    }

    Got the hang of that one? Here are some that redirect based on whether or not the repository/service call was successful. In this example, imagine you have just asked to create the user:

    [Test]
    public void SuccessfulCreateRedirectsToIndex_UsingRestfulAction()
    {
        GivenController.As<UserController>()
            .ShouldRedirectTo(RestfulAction.Index)
            .IfCallSucceeds()
            .WhenCalling(x => x.Create(null));
    }

    [Test]
    public void UnsuccessfulCreateDisplaysNew_UsingString()
    {
        GivenController.As<UserController>()
            .ShouldRenderView("New")
            .IfCallFails()
            .WhenCalling(x => x.Create(null));
    }

    Here’s a test that ensures that the correct status code is returned. In this case, I will have done a GET /user and I would expect a 200 (OK) result. This is very useful if you want to play nicely with the browser – MVC doesn’t always return the status code you expect.

    [Test]
    public void ShowReturns200()
    {
        GivenController.As<UserController>()
            .ShouldReturnHead(HttpStatusCode.OK)
            .WhenCalling(x => x.Show());
    }

    The .ShouldReturnHead above is a helper method; it actually delegates to .Should, which you too can use to write your own custom test helpers:

    [Test]
    public void GenericShould()
    {
        GivenController.As<UserController>()
            .Should(x => x.AssertResultIs<HeadResult>().StatusCode.ShouldBe(HttpStatusCode.OK))
            .WhenCalling(x => x.Show());
    }

    Now we can start combining some tests. In this case, we want to create a new customer and, if that succeeds, render the New view and ensure that ViewData.Model has something in it (and we could check that it is the customer).

    [Test]
    public void ModelIsPassedIntoIfSuccess()
    {
        var customer = new Customer { FirstName = "Bob" };

        GivenController.As<UserController>()
            .Should(x => x.AssertResultIs<ViewResult>().ViewData.Model.ShouldNotBeNull())
            .WhenCalling(x => x.Create(customer));
    }

    Sometimes, we only want to return actions based on the header location, so we can set this up first with .WithLocation.

    [Test]
    public void HeaderSetForLocation()
    {
        GivenController.As<UserController>()
            .WithLocation("http://localhost")
            .WhenCalling(x => x.NullAction());
    }

    There is also access to the Request through Rhino Mocks; try it like this:

    [Test]
    public void GenericHeaderSet()
    {
        GivenController.As<UserController>()
            .WithRequest(x => x.Stub(location => location.Url).Return(new Uri("http://localhost")))
            .WhenCalling(x => x.CheckHeaderLocation());
    }


    March 6th, 2010 1 comment

    Test Automation Pyramid in ASP.NET MVC

    This is a reposting of my comments from Mike Cohn’s Test Automation Pyramid

    I often use Mike’s Test Automation Pyramid to explain to clients’ testers and developers how to structure a test strategy. It has proved the most effective rubric (compared with, say, Brian Marick’s Quadrants model, as further evolved by Crispin and Gregory) for getting people thinking about what is going on in testing the actual application and its stress points. I want to add that JB Rainsberger’s talk mentioned above is crucial to understanding why that top level set of tests can’t prove integrity of the product by itself.

    It has got me thinking that perhaps we need to rethink some assumptions behind these labels. Partly because my code isn’t quite the same as, say, described here, I am suggesting something slightly different from, say, this approach. The difference of opinion in these blogs also suggests this. So I thought I would spend some time talking about how I use the pyramid and then come back to rethinking its underlying assumptions.

    I have renamed some parts of the pyramid so that at a first glance it is easily recognisable by clients. This particular renaming is in the context of writing MVC web applications. I get teams to draw what their pyramid looks like for their project – or what they might want it to be, because it is often upside down.

    My layers:

    • System (smoke, acceptance)
    • Integration
    • Unit

    I also add a cloud on top (I think from Crispin and Gregory) for exploratory testing. This is important for two reasons: (1) I want automated testing so that I can allow more time for manual testing and to emphasise that (2) there should be no manual regression tests. This supports Rainsberger’s argument not to use the top-level testing as proof of the system’s integrity – to me the proof is in the use of the system. Put alternatively, automated tests are neither automating your tester’s testing nor are they a silver bullet. So if I don’t have a cloud, people forget that manual testing is part of the overall test strategy (plus, with a cloud, when the pyramid is inverted it makes a good picture of ice cream in a cone and you can have the image of a person licking the ice cream and it falling off ;-) .)

    In the context of an MVC application, this type of pyramid has led me to some interesting findings at the code base level. Like everyone is saying, we want to drive testing down towards the unit tests because they are foundational, discrete and cheapest. To do this, it means that I need to create units that can be tested without boundary crossing. For MVC (just like Rails), this means that I can unit test (with the aid of isolation frameworks):

    • models and validations (particularly using ObjectMother)
    • routes coming in
    • controller rendering of actions/views
    • controller redirection to actions/views
    • validation handling (from errors from models/repositories)
    • all my jQuery plugin code for UI-based rendering
    • any HTML generation from HtmlHelpers (although I find this of little value and brittle)
    • and of course all my business “services”

    I am always surprised at how many dependencies I can break throughout my application to make unit tests – in all of these cases I do not need my application to be running in a webserver (IIS or Cassini). They are quick to write, quick to fail. They do, however, require additional code to be written or libraries to be provided (eg MvcContrib Test Helpers).

    For integration tests, I now find that the only piece of the application that still requires a dependency is the connection to the database. Put more technically, I need to check that my repository pattern correctly manages my object’s lifecycle and its identity; it also ensures that I correctly code the impedance mismatch between the object layer of my domain and the relational layer of the database. In practice, this is checking a whole load of housekeeping rather than business logic: eg that my migration scripts are in place (eg schema changes, stored procs), my mapping code (eg ORM), and that the code links all this up correctly. Interestingly, I now find that this layer in terms of lines of code is less than the pyramid suggests, because there is a lot of code in a repository service that can be unit tested – it is really only the code that checks identity that requires a real database. The integration tests left tend then to map linearly to the CRUD functions. I follow the rule: one test per dependency. If my integration tests get more complicated it is often time to go looking for domain smells – in the domain driven design sense, I haven’t got the bounded context right for the current state/size of the application.
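    The shape of that identity check, sketched against a toy in-memory repository (JavaScript for illustration – the real test would go through the actual repository and database):

```javascript
// Sketch of an identity-lifecycle test: save an entity, fetch it back by the
// identity the repository assigned, and check the round trip. The in-memory
// Map stands in for the real database; names are illustrative.
class Repository {
  constructor() {
    this.store = new Map();
    this.nextId = 1;
  }
  save(entity) {
    if (entity.id == null) {
      entity.id = this.nextId++; // the repository, not the caller, assigns identity
    }
    this.store.set(entity.id, entity);
    return entity.id;
  }
  get(id) {
    return this.store.get(id);
  }
}

const repo = new Repository();
const id = repo.save({ name: 'banner' });
const found = repo.get(id);

console.log(id);          // 1
console.log(found.name);  // banner
```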

    For the top layer, like others I see it as the end-to-end tests and it covers any number of dependencies to satisfy the test across scenarios.

    I have also found that there are actually different types of tests inside this layer. Because it is a web application, there is the smoke test – some critical path routes that show that all the ducks are lined up – Selenium, Watir/n and even Steve Sanderson’s MvcIntegrationTestFramework are all fine. I might use these tests to target parts of the application that are known to be problematic so that I get as early a warning as possible.

    Then there are the acceptance tests. This is where I find the most value, not only because it links customer abstractions of workflow with code but also, as importantly, because it makes me attend to code design. I find that to run maintainable acceptance tests you need to create yet another abstraction. Rarely can you just hook up the SUT API and have it work. You need setup/teardown data and various helper methods. To do this, I explicitly create “profiles” in code for the setup of data and exercising of the system. For example, when I wrote a banner delivery tool for a client (think OpenX or GoogleAds) I needed to create a “Configurator” and an “Actionator” profile. The Configurator was able to create a number of banner ads in the system (eg an html banner on this site, a text banner on that site) and the Actionator then invoked 10,000 users on this page on that site. In both cases, I wrote C# code to do the job (think an internal DSL as a fluent interface) rather than, say, doing it in Fitnesse.

    Why are these distinctions important? A few reasons. The first is that the acceptance tests in this form are a test of the design of the code rather than the function. I always have to rewrite parts of my code so that the acceptance tests can hook in. It has only ever improved my design, such as separation of concerns, and it has often given me greater insight into my domain model and its bounded contexts. For me, these acceptance tests are yet another conversation with my code – but by the time I have had unit, integration and acceptance test conversations about the problem, the consensus decision isn’t a bad point to be at.

    Second is that I can easily leverage my DSL for performance testing. This is going to help me in the non-functional testing (or the fourth quadrant of the Test Quadrants model).

    Third is that this is precisely the setup you need for a client demo. So at any point, I can crank up the demo data for the demo or exploratory testing. I think it is at this point that we have a closed loop: desired function specified, code to run, and data to run against.

    Hopefully, that all makes some sense. Now back to thinking about the underlying assumptions of what is going on at each layer. I think we are still not clear on what we are really testing at each layer in the pyramid: most mappings tend to be around the physical layers, the logical layers or the roles within the team. For example, some map it to MVC, particularly because the V maps closely to the UI. Others stay with the traditional unit, functional and integration, partly because of the separation of roles within a team.

    I want to suggest that complexity is a better underlying organisation. I am happy to leave the nomenclature alone: the bottom is where there are no dependencies (unit), the second has one dependency (integration) and the top has as many as you need to make it work (system). It seems to me that the bottom two layers require you to have a very clear understanding of your physical and logical architecture, expressed in terms of boxes and directed lines, ensuring that you test each line at every boundary.

    If you look back to my unit tests, they identified logical parts of the application and tested at boundaries. Here’s one you might not expect. The UI is often seen as a low value place to test. Yet frameworks like jQuery suggest otherwise and break down our layering: I can unit test a lot of the browser code which is traditionally seen as the UI layer. I can widgetize any significant interactions or isolate any specific logic and unit test this outside the context of the application running (StoryQ has done this).

    The integration tests test across a logical and often physical boundary. They really have only one dependency. Because there is one dependency, the nature of complexity here is still linear. One dependency equals no interaction with other contexts.

    The top level is all about putting it together so that people across different roles can play with the application and use complex heuristics to check its coding. But I don’t think the top level is really about the user interface per se. It only looks that way because the GUI is the most generalised abstraction through which customers and testers believe they understand the workings of the software. Working software and the GUI should not be conflated. Complexity at the top-most level is that of many dependencies interacting with each other – context is everything. Complexity here is not linear. We need automated system testing to follow critical paths that create combinations or interactions that we can prove do not have negative side effects. We also need exploratory testing which is deep, calculative yet ad hoc and attempts to create negative side effects that we can then automate. Neither strategy aspires to elusive, exhaustive testing – which, as JB Rainsberger argues, is the scam of integration testing.

    There’s a drawback when you interpret the pyramid along these lines. Test automation requires a high level of understanding of your solution architecture, its boundaries and interfaces, the impedance mismatches in the movement between them, and a variety of toolsets required to solve each of these problems. And I find it requires a team with a code focus. Many teams and managers I work with find the hump of learning and its associated costs too high. I like the pyramid because I can slowly introduce more subtle understandings of it as the team gets more experience.


    I have just been trawling through C# DDD type books written for Microsoft focussed developers looking for the test automation pyramid. There is not one reference to this type of strategy. At best, one book, .NET Domain-Driven Design with C#: Problem – Design – Solution, touches on unit testing. Others mention that good design helps testability right at the end of the book (eg Wrox Professional ASP.NET Design Patterns). These are both books that are responding to Evans’ and Nilsson’s books. It is a shame really.


    March 3rd, 2010 No comments

    Validation specification testing for c# domain entities

    I have just been revisiting fluent configuration in the context of writing a fluent tester for validation of my domain entities. In that post, I wrote my test assertions as:

       new ImageBanner().SetupWithDefaultValuesFrom(ValidMO.ImageBanner).IsValid().ShouldBeTrue();
       new ImageBanner { Name = "" }.SetupWithDefaultValuesFrom(ValidMO.ImageBanner).IsValid().ShouldBeFalse();

    This is okay and I have had luck teaching it. But, it feels long-winded for what I really want to do:

    • check that my fake is valid
    • check validations for each property (valid and invalid)

    Today, I was looking at Fluent Nhibernate and noticed their persistence specification testing. This looks much more expressive:

    	[Test]
    	public void CanCorrectlyMapEmployee()
    	{
    	    new PersistenceSpecification<Employee>(session)
    	        .CheckProperty(c => c.Id, 1)
    	        .CheckProperty(c => c.FirstName, "John")
    	        .CheckProperty(c => c.LastName, "Doe")
    	        .VerifyTheMappings();
    	}

    How about this then for a base test:

    		.CheckPropertyInvalid(c => c.Name, "")

    This test makes some assumptions:

    • It will call IsValid() method on the entity
    • It allows you to pass in a value to the property, in this case an empty string
    • It will need to make assertions with your current test framework (fine, we do that with StoryQ)

    You can see that I prefer the static constructor rather than using new.

    There are obviously, a range of syntax changes I could make here that would mimic the validation attributes. For example:

    		.CheckPropertyMandatory(c => c.Name)
    		.CheckPropertyAlphaNumeric(c => c.Name)

    Because .CheckProperty would be an extension method, you could easily add more as you go for your validations. Let’s start with that because that’s all I need for now – we will want to be able to change the callable IsValid method. Fluent NHibernate also passes in an IEqualityComparer, which makes me wonder if a mechanism like this could be useful – it certainly looks cool!
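    Under the three assumptions listed above, the tester itself is small. A toy sketch in JavaScript for illustration (the real thing would be C# extension methods over lambdas; the entity and its rule below are made up):

```javascript
// Toy validation-specification tester following the assumptions in the post:
// set the property value, call isValid(), and record a failure if the entity
// still claims to be valid. Names are illustrative.
class ValidationSpecification {
  constructor(entity) {
    this.entity = entity;
    this.failures = [];
  }
  checkPropertyInvalid(name, value) {
    this.entity[name] = value;
    if (this.entity.isValid()) {
      this.failures.push(name + ' should be invalid for ' + JSON.stringify(value));
    }
    return this; // fluent, so checks chain
  }
  verify() {
    return this.failures;
  }
}

// A toy entity where name is mandatory.
const banner = {
  name: 'a valid name',
  isValid: function () { return this.name !== ''; }
};

const failures = new ValidationSpecification(banner)
  .checkPropertyInvalid('name', '')
  .verify();

console.log(failures.length); // 0 – the empty name was correctly rejected
```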

    There are still problems with this. The code is not DRY – c => c.Name repeats. This is because the test is focussed around a property rather than a mapping. So the above syntax would be useful when the test’s unit of work (business rule) is about combining properties. I think then that we would need another expression when we want to express multiple states on the same property. Let’s give that a go:

    		.WithProperty(c => c.Name)

    I find the language still a little too verbose, so I might start making it smaller to:

    		.With(c => c.Name)

    I look at this code and wonder if I have been lured by a fluent syntax that doesn’t work for testing properties (rather than its original purpose of testing mappings). I will have to provide an implementation of both IsAlphaNumeric() and IsMandatory(). And then I am left with the question of how to mix the positive and negative assertion on Verify().

    Here, I’m not sure that I am any better off when it comes to typing less. I am typing more and I have yet another little DSL to learn. I do think though that if I am writing business software which requires clarity in the domain model this is going to be useful. I can do a few things:

    One, I can write my domain rules test-first. Looking at the example above, there are a couple of things that help me test first. When typing c => c.Name I am going to get IDE support – well, I am in Visual Studio with ReSharper. I can type my new property and get autocompletion from the original object. Because I am going to specify the value in the context of the test (eg c => c.Name, "My new name") it is good that I don’t have to go near the original object. Furthermore, I am not going to have the overhead of moving back to my MotherObject class to create the data on the new object. I may do this later, but will do so as a refactor. For example, if I realise that the domain object with the name “My new name” is somehow a canonical form, I would create a new mother object for use in other tests, eg ValidMO.WithNewName. Here’s what this verbose code looks like:

    	public static ImageBanner WithNewName {
    		get {
    			var mo = ValidMO.ImageBanner;
    			mo.Name = "My new name";
    			return mo;
    		}
    	}
    Two, I can understand when I have extended my set of rules. With this approach, I have to use the abstraction layer in the fluent interface object (eg IsMandatory(), IsAlphaNumeric()). When I haven’t got an implementation then I am not going to test first because the call simply isn’t there. I am of the opinion that this is for the best because the barrier to entry for creating a new validation is higher. This may seem counterintuitive. When writing business software, I always have developers with less experience (to no experience) in writing test-first, domain objects. Few of them have used either a specification pattern or a validations library. I therefore need to ensure that when implementing a new validation type (or rule) they have thoroughly understood the domain and the existing validations, and that the rule does not already exist. Often rule types are there and easy to miss; other times, there is an explosion of rule types because specific uses of a generalised rule have been created as types – so a little refactor is better. So, the benefit of having to slow down and implement a new call in the fluent interface object is that it pushes us to think harder and delay rather than rush.

    Three, I should be able to review my tests at a different level of granularity. By granularity, I probably mean a different grouping. Often on a property, there are a number of business rules in play. In the example, Name, let’s just imagine that I had correctly named it firstName – this set of tests is about the first name by itself. There is then another rule, and that is how it combines with lastName, because the business rule is that the two of them make up the full name. The next rule is to do with this combination: say, that the two names for some reason cannot be the same. I wouldn’t want to have that rule in the same test because that creates a test smell. Alternatively, I might have another rule about the length of the field, imposed because we now have an interaction with a third party which requires this constraint. It would then be easy to create new rules that allow for a different grouping. I re-read this and the example seems benign!
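To make the grouping concrete, here is a rough sketch with one test per business rule, so the single-property rule and the combination rule never share a test. The Person entity and its rules are invented purely for illustration:

```csharp
using System;

// Invented entity: firstName is mandatory; firstName and lastName
// combine into the full name and may not be identical.
public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public bool IsValid()
    {
        return !string.IsNullOrEmpty(FirstName) && FirstName != LastName;
    }
}

public class PersonRules
{
    // Rule group 1: the first name by itself
    public void FirstNameIsMandatory()
    {
        var p = new Person { FirstName = "", LastName = "Doe" };
        if (p.IsValid()) throw new Exception("Empty first name should be invalid");
    }

    // Rule group 2: how firstName combines with lastName
    public void FirstAndLastNameCannotBeTheSame()
    {
        var p = new Person { FirstName = "Doe", LastName = "Doe" };
        if (p.IsValid()) throw new Exception("Matching names should be invalid");
    }
}
```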

    Let’s come back to the original syntax and compare the two and see if we have an easier test:

    		new ImageBanner { Name = "" }.SetupWithDefaultValuesFrom(ValidMO.ImageBanner).IsValid().ShouldBeFalse();

    Is this any better?

    		.With(c => c.Name, "")

    As I read this, I can see that everyone is going to want their own nomenclature, which will lead to rolling their own set of test helpers. Clearly, both use a functional style of programming that became easier with C# 3.5. However, the top example chains different library syntax together to make it work:

    • new ImageBanner { Name = "" } – C# 3.5 object initialiser
    • .SetupWithDefaultValuesFrom(ValidMO.ImageBanner) – custom helper
    • .IsValid() – domain object’s own self-checking mechanism
    • .ShouldBeFalse() – BDD helper

    The bottom example provides an abstraction across all of these so that it only uses the C# 3.5 syntax (generics and lambdas).

    • Test.ValidationSpecification<ImageBanner>(ValidMO.ImageBanner) – returns an object that contains the entity set up through the valid MO
    • .With(c => c.Name, "") – allows you access back to your object to override values
    • .VerifyFails() – wraps IsValid() and ShouldBeFalse()
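Pulling those three bullets together, the wrapper might look like this rough, assumed implementation. None of this is real library code; ImageBanner and ValidMO are the examples from the post:

```csharp
using System;
using System.Linq.Expressions;
using System.Reflection;

public interface IValidatable { bool IsValid(); }

// Hypothetical entry point mirroring the bottom example.
public static class Test
{
    public static EntitySpec<T> ValidationSpecification<T>(T seededFromMotherObject)
        where T : IValidatable
    {
        return new EntitySpec<T>(seededFromMotherObject);
    }
}

public class EntitySpec<T> where T : IValidatable
{
    readonly T _entity;
    public EntitySpec(T entity) { _entity = entity; }

    // Allows access back to the already-seeded object to override values
    public EntitySpec<T> With<TProp>(Expression<Func<T, TProp>> property, TProp value)
    {
        var prop = (PropertyInfo)((MemberExpression)property.Body).Member;
        prop.SetValue(_entity, value, null);
        return this;
    }

    // Wraps IsValid() and the BDD-style negative assertion
    public void VerifyFails()
    {
        if (_entity.IsValid())
            throw new Exception("Expected validation to fail");
    }

    // The positive counterpart
    public void Verify()
    {
        if (!_entity.IsValid())
            throw new Exception("Expected validation to pass");
    }
}
```

With this in place, a test would read as in the bullets above: `Test.ValidationSpecification<ImageBanner>(ValidMO.ImageBanner).With(c => c.Name, "").VerifyFails();`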

    In conclusion, I think it might work. Put differently, I think the effort may be worth it and pay dividends. I don’t think that it will distract from the Mother object strategy which I find invaluable in teaching people to keep data/canonical forms separate from logic and tests.

    In reflection, there is one major design implication that I like: I no longer have to call IsValid() on the domain model. I have always put this call on the domain object because I want simple access to the validator. Putting it here makes tests much easier to write because I don’t have to instantiate a ValidatorRunner. Now, with Verify and VerifyFails, I can delegate the runner into this area. That would be nice and would clean up the domain model. However, it does mean that I am going to have to have an implementation of the runner available for the UI layer too. Hmmm, on second thoughts, we’ll have to see what the code looks like!