
Archive for March, 2010

Fluent Configurator – thoughts on getting a design

March 14th, 2010

In a previous post series I wrote about Configuration DSLs. I’m now looking at this from another angle, going back to some reading about the Microsoft configuration mechanism itself.

A quick note about the previous approach. The configuration is centralised into one Configuration object which the application then has available. This is similar to the Microsoft approach because we have ConfigurationManager available everywhere. The problem for me is that with ConfigurationManager the central configuration point is the app.config, and that is how developers expect the application to work – working against such conventions often creates more friction than is necessary.
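For illustration, the conventional shape of that central point looks like this (the setting keys here are my own invention):

using System.Configuration;

public static class Settings
{
    // Values come from app.config/web.config via ConfigurationManager,
    // which is available everywhere in the application.
    public static string SmtpHost
    {
        get { return ConfigurationManager.AppSettings["SmtpHost"]; }
    }

    public static string MainConnection
    {
        get { return ConfigurationManager.ConnectionStrings["Main"].ConnectionString; }
    }
}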

Therefore, if you can’t use (or replace) a central configuration object as above, you need to have multiple xml settings files. But how? Microsoft has a design-time changer – the app designer. AFAIK this works when you deploy from your design environment. That is far from ideal and means that you create a build per environment. I often see this approach, or something that approximates it. But it does not lend itself to a build server. Here’s a well documented approach that can work at both design and deploy time with some adjustments, as this one is design time only and we would need to implement the same code in the deploy.proj for msbuild.

So the next approach, which I have used, is to update through xml replacements. This is easiest done by replacing entire xml files which are included into the main xml file (eg via a section’s configSource attribute). This can be done easily at deploy time. The problem is that all environment settings are shipped. In the situation where the package is for one client, this isn’t actually a problem. The question is the switching mechanism used to swap over the xml files. There is the other issue of reconciling differences between environment settings.

This leads to a solution of having all settings in the same file. This approach is outlined in Creating Custom Configuration Sections, which I reference below [2]. It could work well for the settings in your application. For example, when using a proxy, I hook this up myself with my WebRequest. However, this approach wouldn’t work for log4net settings. So, we still need a better approach for specifying settings in different environments.
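As a sketch of that proxy example (ProxySetup and the “ProxyAddress” key are my own naming, not from the article):

using System.Configuration;
using System.Net;

public static class ProxySetup
{
    // Wire up a per-environment proxy setting by hand rather than
    // relying on the system.net/defaultProxy config section.
    public static void Apply()
    {
        var address = ConfigurationManager.AppSettings["ProxyAddress"];
        if (!string.IsNullOrEmpty(address))
            WebRequest.DefaultWebProxy = new WebProxy(address);
    }
}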

Where I want to head is to write a wrapper that manages writing out the xml from C# configuration. This would allow a management strategy that works with the current approach, but could also work in the first mode of providing its own configuration references. Because settings are in C# they are hidden unless needed. Using an approach like migratordotnet, we interrogate the dll for the settings and make updates at deploy time. This interrogation can take the form of dry-run, diffs, updates and backups.
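Here is a speculative sketch of that idea – every name is invented to show the shape, not a real API:

using System.Collections.Generic;

public abstract class EnvironmentSettings
{
    public readonly IDictionary<string, string> AppSettings =
        new Dictionary<string, string>();

    public abstract string Environment { get; }
    public abstract void Define();

    protected void AppSetting(string key, string value)
    {
        AppSettings[key] = value;
    }
}

public class ProdSettings : EnvironmentSettings
{
    public override string Environment { get { return "Prod"; } }

    public override void Define()
    {
        AppSetting("ReportServerUrl", "http://reports/prod");
        AppSetting("SmtpHost", "mail.internal");
    }
}

A deploy-time runner would then reflect over the dll for EnvironmentSettings subclasses (much as migratordotnet does for migrations) and write or patch the app.config, with the dry-run, diff, update and backup modes described above.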

Configuration is complex and has many views. All those views combined often lead to mistakes, so we need it to be as simple as possible. Very simple approaches have often resulted in cut-and-paste management – I think this is simplistic and error prone. As a developer, I understand that for my application I need a way to extend my reach into the production environment and yet give operations enough control for customisation. Because I certainly don’t want to do overnight support, nor do I want to know the credentials for production systems.

There is a final issue. How do we know what environment we are in? (and what are these called? – local, build, test, prod) Given windows conventions, mixed in with a little *nix, I would go with this lookup order (a sketch follows the list):

  • System environment variable – in C# accessed through a registry key and batch (eg set APP_ENV=Prod)
  • Process environment variable – eg passed in via commandline and used in say msbuild
  • machine.config
  • app/web.config – I would only use this after it has been updated from one of the above (if you are using this, it is likely that it was manually set)
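A minimal sketch of that lookup, assuming APP_ENV as the variable name and an “Environment” appSettings key:

using System;
using System.Configuration;

public static class AppEnvironment
{
    public static string Current
    {
        get
        {
            // System (registry-backed), then process, then config –
            // machine.config values surface through AppSettings as well.
            return Environment.GetEnvironmentVariable("APP_ENV", EnvironmentVariableTarget.Machine)
                ?? Environment.GetEnvironmentVariable("APP_ENV", EnvironmentVariableTarget.Process)
                ?? ConfigurationManager.AppSettings["Environment"]
                ?? "Local";
        }
    }
}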

Note: I think that for enterprise-type solutions (only one production deployment) we do know about every environment that we are deploying to. If not, these ideas are not for you because you have multiple deployments (customers). To me, that is consumer-type software and requires installers that can manage themselves.

References

[1] – General about ConfigurationManager

If you want to learn about configuration then here is the key series by Jon Rista on The Code Project

[2] – Creating custom configuration sections for environment

  • example to provide environment specific settings (dev through to prod)
  • implementation is for runtime switching
  • use a custom implementation of AppSettings
  • uses the machine.config for global settings
  • an approach that requires a specific implementation and xsd updates – a good example that the current implementation is limited and customisations are complex
  • design time checking through xsd (rather than intellisense)

[3] – using compile/deploy time switching

[4] – design time deltas with deploy time switching

[5] – poking files

  • XmlMassUpdate (MSBuild Community Tasks)
  • xmlpoke (NAnt)
  • XMLConfig

Fluent Controller MvcContrib – Part II – Coding the Controller Actions and Redirects

March 7th, 2010

In the last entry, we designed our controller through a test:

[Test]
public void SuccessfulIndex()
{
    GivenController.As<UserController>()
        .ShouldRenderItself(RestfulAction.Index)
        .WhenCalling(x => x.Index());
}

Now, we need to code this:

A Fluent Controller in action

In fact, to make the test above run, you need to do nothing more than inherit from the AbstractRestfulFluentController.

using MvcContrib.FluentController;

public class UserController : AbstractRestfulFluentController
{
    public ActionResult Index()
    {
        return View();
    }
}	

Let’s now look at how to create a skinny controller. Let’s take a redirection design:

GivenController.As<UserController>()
    .ShouldRedirectTo(RestfulAction.Index)
    .IfCallSucceeds()
    .WhenCalling(x => x.Create(null));

Here’s the controller code we need:

public ActionResult Create(object model)
{
    return CheckValidCall()
        .Valid(x => RedirectToAction(RestfulAction.Index))
        .Invalid(() => View("New", model));
}

A more complex example of Create is where it calls a repository. Here, if the repository throws an exception it will execute .Invalid; otherwise it will execute .Valid.

public ActionResult Create(object model)
{
    return CheckValidCall(() => UserRepository.Create(model))
        .Valid(x => View(RestfulAction.New, x))
        .Invalid(() => RedirectToAction(RestfulAction.Index));
}

So far you have seen two fluent controller actions; however, also available are:

  • InvalidNoNewErrors
  • Other

And within the context of CheckValidCall you also get access to:

  • controller
  • model
  • isValid
  • newErrors

Another feature to be aware of is that you can create your own extensions. The current CheckValidCall returns a FluentControllerAction&lt;T&gt;, so all you have to do is create an extension on this as you would with any extension method. You can even rewrite CheckValidCall and create your own.
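For example, a skeleton of such an extension might look like this. The members of FluentControllerAction&lt;T&gt; are not shown in this post, so the body here is only a placeholder to illustrate the extension point:

using MvcContrib.FluentController;

public static class FluentControllerActionExtensions
{
    // A hypothetical project-wide convention; the real body would wrap
    // action.Invalid(...) rather than being a no-op.
    public static FluentControllerAction<T> InvalidRedirectsToIndex<T>(
        this FluentControllerAction<T> action)
    {
        return action;
    }
}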

Fluent Controller MvcContrib – Part I – Designing Controller Actions and Redirects

March 7th, 2010

In an earlier post (Test Automation Pyramid in ASP.NET MVC) I said that I unit test my controllers. What I didn’t say was that I (with most of the work done by my colleague Mark, and field tested by Gurpreet) had to write code to make this possible. To unit test our controllers we put in a layer of code that was an abstraction for “redirections”. This has turned out to be very successful for the line-of-business applications we write. Mark coined this a fluent controller. Our fluent controller has these benefits:

  • test-first design of controller redirections
  • also isolate and verify that the controller makes the correct repository/service calls
  • lower the barrier to entry for developers new to MVC
  • avoids a fat controller antipattern
  • standardises flow within actions

I also tend toward a REST design in my applications, so we also wanted it to live on top of the SimplyRestful contrib. You’ll find there’s an abstract class both with and without SimplyRestful. I will make a quick, unsubstantiated comment: the fluent controller does not actually help you create a controller for a REST application – it is Restful but not really a full implementation of REST.

Designing Controller Actions

In a SimplyRestful design, the flow is standardised per resource. We have our seven actions (Index, Show, New, Create, Edit, Update, Destroy) and we decide which of them are going to be implemented.

Fluent Controller Action Redirects/Renders

How to write it test first?

In words: I have a User Controller that displays a user and I can do the usual CRUD functions. These examples are taken from the MvcContrib source unit tests.

In this test, I enter on Index and it should just render itself; I don’t need to pass anything in:

using MvcContrib.TestHelper.FluentController;
	
[Test]
public void SuccessfulIndex()
{
    GivenController.As<UserController>()
        .ShouldRenderItself(RestfulAction.Index)
        .WhenCalling(x => x.Index());
}

Got the hang of that one? Here are some that redirect based on whether or not the repository/service call was successful. In this example, imagine you have just said: create me the user:

[Test]
public void SuccessfulCreateRedirectsToIndex_UsingRestfulAction()
{

    GivenController.As<UserController>()
        .ShouldRedirectTo(RestfulAction.Index)
        .IfCallSucceeds()
        .WhenCalling(x => x.Create(null));
}

[Test]
public void UnsuccessfulCreateDisplaysNew_UsingString()
{
    GivenController.As<UserController>()
        .ShouldRenderView("New")
        .IfCallFails()
        .WhenCalling(x => x.Create(null));
}

Here’s a test that ensures that the correct status code is returned. In this case, I will have done a GET /user and I would expect a 200 (OK) result. This is very useful if you want to play nicely with the browser – MVC doesn’t always return the status code you expect.

[Test]
public void ShowReturns200()
{
    GivenController.As<UserController>()
        .ShouldReturnHead(HttpStatusCode.OK)
        .WhenCalling(x => x.Show());
}

The .ShouldReturnHead was a helper method; it actually delegates back to .Should, which you too can use to write your own custom test helpers:

[Test]
public void GenericShould()
{
    GivenController.As<UserController>()
        .Should(x => x.AssertResultIs<HeadResult>().StatusCode.ShouldBe(HttpStatusCode.OK))
        .WhenCalling(x => x.Show());
}

Now we can start combining some tests. In this case, we want to create a new customer and, if that succeeds, render the New view and ensure that ViewData.Model has something in it (and we could check that it is the customer).

[Test]
public void ModelIsPassedIntoIfSuccess()
{
    var customer = new Customer { FirstName = "Bob" };

    GivenController.As<UserController>()
        .ShouldRenderView(RestfulAction.New)
        .Should(x => x.AssertResultIs<ViewResult>().ViewData.Model.ShouldNotBeNull())
        .IfCallSucceeds(customer)
        .WhenCalling(x => x.Create(customer));

}

Sometimes we only want to return actions based on the header location, so we can set this up first with .WithLocation.

[Test]
public void HeaderSetForLocation()
{
    GivenController.As<UserController>()
        .WithLocation("http://localhost")
        .ShouldReturnEmpty()
        .WhenCalling(x => x.NullAction());
}

There is also access to the Request through Rhino Mocks; try it like this:

[Test]
public void GenericHeaderSet()
{
    GivenController.As<UserController>()
        .WithRequest(x => x.Stub(location => location.Url).Return(new Uri("http://localhost")))
        .ShouldReturnEmpty()
        .WhenCalling(x => x.CheckHeaderLocation());
}

Test Automation Pyramid in ASP.NET MVC

March 6th, 2010


This is a reposting of my comments from Mike Cohn’s Test Automation Pyramid

I often use Mike’s Test Automation Pyramid to explain to clients’ testers and developers how to structure a test strategy. It has proved the most effective rubric (say, compared with Brian Marick’s Quadrants model, as further evolved by Crispin and Gregory) for getting people thinking about what is going on in testing the actual application and its stress points. I want to add that JB Rainsberger’s talk mentioned above is crucial to understanding why that top-level set of tests can’t prove the integrity of the product by itself.

It has got me thinking that perhaps we need to rethink some assumptions behind these labels. Partly this is because my code isn’t quite the same as described elsewhere and I am suggesting something slightly different from those approaches; the difference of opinion across blogs also suggests this. So I thought I would spend some time talking about how I use the pyramid and then come back to rethinking its underlying assumptions.

I have renamed some parts of the pyramid so that at first glance it is easily recognisable by clients. This particular renaming is in the context of writing MVC web applications. I get teams to draw what their pyramid looks like for their project – or what they might want it to be, because it is often upside down.

My layers:

  • System (smoke, acceptance)
  • Integration
  • Unit

I also add a cloud on top (I think from Crispin and Gregory) for exploratory testing. This is important for two reasons: (1) I want automated testing so that I can allow more time for manual testing, and to emphasise that (2) there should be no manual regression tests. This supports Rainsberger’s argument not to use the top-level testing as proof of the system’s integrity – to me the proof is in the use of the system. Put alternatively, automated tests are neither automating your tester’s testing nor are they a silver bullet. So if I don’t have a cloud, people forget that manual testing is part of the overall test strategy (plus, with a cloud, when the pyramid is inverted it makes a good picture of ice cream in a cone, and you can have the image of a person licking the ice cream and it falling off ;-) ).

In the context of an MVC application, this type of pyramid has led me to some interesting findings at the code base level. Like everyone is saying, we want to drive testing down towards the unit tests because they are foundational, discrete and cheapest. To do this, I need to create units that can be tested without boundary crossing. For an asp.net MVC application (just like Rails), this means that I can unit test (with the aid of isolation frameworks):

  • models and validations (particularly using ObjectMother)
  • routes coming in (see the route test sketch after this list)
  • controller rendering of actions/views
  • controller redirection to actions/views
  • validation handling (from errors from models/repositories)
  • all my jQuery plugin code for UI-based rendering
  • any HTML generation from HtmlHelpers (although I find this of little value and brittle)
  • and of course all my business “services”
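As one concrete example for the routes item, MvcContrib’s TestHelper lets you assert route mappings without a webserver (MvcApplication.RegisterRoutes and UserController stand in for your application’s own route registration and controller):

using System.Web.Routing;
using MvcContrib.TestHelper;
using NUnit.Framework;

[TestFixture]
public class RouteTests
{
    [SetUp]
    public void Setup()
    {
        RouteTable.Routes.Clear();
        MvcApplication.RegisterRoutes(RouteTable.Routes); // your app's registration
    }

    [Test]
    public void UserIndexRouteMapsToController()
    {
        "~/user".ShouldMapTo<UserController>(x => x.Index());
    }
}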

I am always surprised at how many dependencies I can break throughout my application to make unit tests – in all of these cases I do not need my application to be running in a webserver (IIS or Cassini). They are quick to write and quick to fail. They do, however, require additional code to be written or libraries to be provided (eg MvcContrib Test Helpers).

For integration tests, I now find that the only piece of the application that still requires a dependency is the connection to the database. Put more technically, I need to check that my repository pattern correctly manages my object’s lifecycle and its identity; it also ensures that I correctly code the impedance mismatch between the object layer of my domain and the relational layer of the database. In practice, this is ensuring a whole load of housekeeping rather than business logic: eg that my migration scripts are in place (eg schema changes, stored procs), that my mapping code (eg ORM) is correct, and that the code links all this up properly. Interestingly, I now find that this layer in terms of lines of code is smaller than the pyramid suggests because there is a lot of code in a repository service that can be unit tested – it is really only the code that checks identity that requires a real database. The integration tests that are left tend to map linearly to the CRUD functions. I follow the rule: one test per dependency. If my integration tests get more complicated it is often time to go looking for domain smells – in the domain-driven design sense, I haven’t got the bounded context right for the current state/size of the application.
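A sketch of what such a test tends to look like – UserRepository, Save and Get are hypothetical names; the point is the single database dependency and the identity check:

using NUnit.Framework;

[TestFixture]
public class UserRepositoryIntegrationTest
{
    [Test]
    public void SaveThenGetPreservesIdentity()
    {
        var repository = new UserRepository(); // connects to the test database
        var user = new User { FirstName = "Bob" };

        var id = repository.Save(user);
        var reloaded = repository.Get(id);

        Assert.AreEqual(id, reloaded.Id);
    }
}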

For the top layer, like others I see it as the end-to-end tests and it covers any number of dependencies to satisfy the test across scenarios.

I have also found that there are actually different types of tests inside this layer. Because it is a web application, there is the smoke test – some critical-path routes that show that all the ducks are lined up – selenium, watir/n and even Steve Sanderson’s MvcIntegrationTest are all fine. I might use these tests to target parts of the application that are known to be problematic so that I get as early a warning as possible.

Then there are the acceptance tests. This is where I find the most value, not only because it links customer abstractions of workflow with code but also, as importantly, because it makes me attend to code design. I find that to run maintainable acceptance tests you need to create yet another abstraction. Rarely can you just hook up the SUT api and it works. You need setup/teardown data and various helper methods. To do this, I explicitly create “profiles” in code for the setup of data and exercising of the system. For example, when I wrote a banner delivery tool for a client (think OpenX or GoogleAds) I needed to create a “Configurator” and an “Actionator” profile. The Configurator was able to create a number of banner ads in the system (eg an html banner on this site, a text banner on that site) and the Actionator then invoked 10,000 users on this page on that site. In both cases, I wrote C# code to do the job (think an internal DSL as a fluent interface) rather than, say, in fitnesse.
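A compressed sketch of the shape of those profiles (all names invented; the real ones belonged to the client’s domain):

public class Configurator
{
    public Configurator HtmlBanner(string site) { /* create an html banner ad on site */ return this; }
    public Configurator TextBanner(string site) { /* create a text banner ad on site */ return this; }
}

public class Actionator
{
    public Actionator Users(int count) { /* simulate count users */ return this; }
    public Actionator VisitPage(string url) { /* point those users at url */ return this; }
}

// Usage from an acceptance test:
//   new Configurator().HtmlBanner("site-a").TextBanner("site-b");
//   new Actionator().Users(10000).VisitPage("site-b/home");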

Why are these distinctions important? A few reasons. The first is that the acceptance tests in this form are a test of the design of the code rather than the function. I always have to rewrite parts of my code so that the acceptance tests can hook in. It has only ever improved my design, such as separation of concerns, and it has often given me greater insight into my domain model and its bounded contexts. For me, these acceptance tests are yet another conversation with my code – but by the time I have had unit, integration and acceptance test conversations about the problem, the consensus decision isn’t a bad point to be at.

Second is that I can easily leverage my DSL for performance testing. This is going to help me in the non-functional testing (the fourth quadrant of the Test Quadrants model).

Third is that this is precisely the setup you need for a client demo. So at any point, I can crank up the demo data for the demo or exploratory testing. I think it is at this point that we have a closed loop: desired function specified, code to run, and data to run against.

Hopefully, that all makes some sense. Now back to thinking about the underlying assumptions of what is going on at each layer. I think we are still not clear on what we are really testing at each layer in the pyramid: most mappings tend to be around the physical layers, the logical layers or the roles within the team. For example, some map it to MVC, particularly because the V maps closely to the UI. Others stay with the traditional unit, functional and integration split, partly because of the separation of roles within a team.

I want to suggest that complexity is a better underlying organisation. I am happy to leave the nomenclature alone: the bottom is where there are no dependencies (unit), the second has one dependency (integration), and the top has as many as you need to make it work (system). It seems to me that the bottom two layers require you to have a very clear understanding of your physical and logical architecture, expressed in terms of boxes and directed lines, to ensure that you test each line for every boundary.

If you look back at my unit tests, they identified logical parts of the application and tested at boundaries. Here’s one you might not expect. The UI is often seen as a low-value place to test. Yet frameworks like jQuery suggest otherwise and break down our layering: I can unit test a lot of the browser code which is traditionally seen as the UI layer. I can widgetize any significant interactions or isolate any specific logic and unit test this outside the context of the running application (StoryQ has done this).

The integration tests test across a logical and often physical boundary. There is really only one dependency. Because there is one dependency, the nature of complexity here is still linear. One dependency equals no interaction with other contexts.

The top level is all about putting it together so that people across different roles can play with the application and use complex heuristics to check its workings. But I don’t think the top level is really about the user interface per se. It only looks that way because the GUI is the most generalised abstraction through which customers and testers believe they understand the workings of the software. Working software and the GUI should not be conflated. Complexity at the top-most level is that of many dependencies interacting with each other – context is everything. Complexity here is not linear. We need automated system testing to follow critical paths that create combinations or interactions that we can prove do not have negative side effects. We also need exploratory testing which is deep, calculative yet ad hoc, and attempts to create negative side effects that we can then automate. Neither strategy aspires to elusive, exhaustive testing – or, as JB Rainsberger argues, the scam of integration testing.

There’s a drawback when you interpret the pyramid along these lines. Test automation requires a high level of understanding of your solution architecture, its boundaries and interfaces, the impedance mismatches in the movement between them, and a variety of toolsets required to solve each of these problems. And I find it requires a team with a code focus. Many teams and managers I work with find the hump of learning and its associated costs too high. I like the pyramid because I can slowly introduce more subtle understandings of it as the team gets more experience.

PostScript

I have just been trawling through C# DDD-type books written for Microsoft-focussed developers looking for the test automation pyramid. There is not one reference to this type of strategy. At best, one book, .NET Domain-Driven Design with C#: Problem – Design – Solution, touches on unit testing. Others mention that good design helps testability right at the end of the book (eg Wrox Professional ASP.NET Design Patterns). These are both books that respond to Evans’ and Nilsson’s books. It is a shame really.

Validation specification testing for C# domain entities

March 3rd, 2010


I have just been revisiting fluent configuration in the context of writing a fluent tester for validation of my domain entities. In that post, I wrote my test assertions as:

   ValidMO.ImageBanner.ShouldNotBeNull();
   ValidMO.ImageBanner.IsValid().ShouldBeTrue();
   new ImageBanner().SetupWithDefaultValuesFrom(ValidMO.ImageBanner).IsValid().ShouldBeTrue();
   new ImageBanner { Name = "" }.SetupWithDefaultValuesFrom(ValidMO.ImageBanner).IsValid().ShouldBeFalse();

This is okay and I have had luck teaching it. But, it feels long-winded for what I really want to do:

  • check that my fake is valid
  • check validations for each property (valid and invalid)

Today, I was looking at Fluent NHibernate and noticed their persistence specification testing. This looks much more expressive:

	[Test]
	public void CanCorrectlyMapEmployee()
	{
	    new PersistenceSpecification<Employee>(session)
	        .CheckProperty(c => c.Id, 1)
	        .CheckProperty(c => c.FirstName, "John")
	        .CheckProperty(c => c.LastName, "Doe")
	        .VerifyTheMappings();
	}

How about this then for a base test:

	Test.ValidationSpecification<ImageBanner>(ValidMO.ImageBanner)
		.CheckPropertyInvalid(c => c.Name, "")
		.Verify();

This test makes some assumptions:

  • It will call IsValid() method on the entity
  • It allows you to pass in a value to the property, in this case an empty string
  • It will need to make assertions with your current test framework (fine, we do that in StoryQ)

You can see that I prefer the static constructor rather than using new.

There are obviously a range of syntax changes I could make here that would mimic the validation attributes. For example:

	Test.ValidationSpecification<ImageBanner>(ValidMO.ImageBanner)
		.CheckPropertyMandatory(c => c.Name)
		.CheckPropertyAlphaNumeric(c => c.Name)
		.Verify();

Because .CheckProperty would be an extension method, you could easily add them as you go for your validations. Let’s start with that because that’s all I need for now – we will want to be able to change the callable IsValid method. Fluent NHibernate also passes in an IEqualityComparer, which makes me wonder if a mechanism like this could be useful – it certainly looks cool!
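To make this concrete, here is a minimal sketch of how the base helper might be implemented. It assumes the entity exposes IsValid() and that SetupWithDefaultValuesFrom clones from the mother object, as in the tests above; the IValidatable interface and the NUnit assertion are my own assumptions:

	using System;
	using System.Collections.Generic;
	using System.Linq.Expressions;
	using System.Reflection;
	using NUnit.Framework;

	public interface IValidatable
	{
	    bool IsValid();
	}

	public class ValidationSpecification<T> where T : class, IValidatable, new()
	{
	    private readonly T _valid;
	    private readonly List<string> _failures = new List<string>();

	    public ValidationSpecification(T valid) { _valid = valid; }

	    public ValidationSpecification<T> CheckPropertyInvalid<TProp>(
	        Expression<Func<T, TProp>> property, TProp invalidValue)
	    {
	        var entity = new T().SetupWithDefaultValuesFrom(_valid); // custom helper from above
	        var prop = (PropertyInfo)((MemberExpression)property.Body).Member;
	        prop.SetValue(entity, invalidValue, null);
	        if (entity.IsValid())
	            _failures.Add(prop.Name + " should be invalid when set to '" + invalidValue + "'");
	        return this;
	    }

	    public void Verify()
	    {
	        if (_failures.Count > 0)
	            Assert.Fail(string.Join("; ", _failures.ToArray()));
	    }
	}

	public static class Test
	{
	    public static ValidationSpecification<T> ValidationSpecification<T>(T valid)
	        where T : class, IValidatable, new()
	    {
	        return new ValidationSpecification<T>(valid);
	    }
	}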

There are still problems with this. The code is not DRY: c => c.Name is repeated. This is because the test is focussed around a property rather than a mapping. So the above syntax would be useful when the test’s unit of work (business rule) is about combining properties. I think then that we would need another expression when we want to express multiple states on the same property. Let’s give that a go:

	Test.ValidationSpecification<ImageBanner>(ValidMO.ImageBanner)
		.WithProperty(c => c.Name)
		.CheckMandatory()
		.CheckAlphaNumeric()
		.Verify();

I find the language still a little too verbose, so I might start making it smaller:

	Test.ValidationSpecification<ImageBanner>(ValidMO.ImageBanner)
		.With(c => c.Name)
		.IsMandatory()
		.IsAlphaNumeric()
		.Verify();

I look at this code and wonder if I have been lured by a fluent syntax that doesn’t work for testing properties (rather than its original purpose of testing mappings). I will have to provide an implementation of both IsAlphaNumeric() and IsMandatory(). And then I am left with the question of how to mix positive and negative assertions on Verify().

Here, I’m not sure that I am any better off when it comes to typing less. I am typing more and I have yet another little DSL to learn. I do think, though, that if I am writing business software which requires clarity in the domain model this is going to be useful. I can do a few things:

One, I can write my domain rules test-first. Looking at the example above, there are a couple of things that can help me test first. When typing the c => c.Name I am going to get IDE support – well, I am in Visual Studio with ReSharper. I can type my new property and get autocompletion of the original object. Because I am going to specify the value in the context of the test, eg c => c.Name, "My new name", it is good that I don’t have to go near the original object. Furthermore, it is okay because I am not going to have the overhead of moving back to my MotherObject class to create the data on the new object. I may do this later but will do so as a refactor. For example, if I now realise that the domain object with the name “My new name” is somehow a canonical form, I would create a new mother object for use in other tests, eg ValidMO.WithNewName. Here’s what this verbose code looks like.

	public static ImageBanner WithNewName {
		get
		{
			var mo = ValidMO.ImageBanner;
			mo.Name = "My new name";
			return mo;
		}
	}

Two, I can understand when I have extended my set of rules. With this approach, I have to use the abstraction layer in the fluent interface object (eg IsMandatory(), IsAlphaNumeric()). When I haven’t got an implementation then I am not going to do test first because the call simply isn’t there. I am of the opinion that this is for the best because the barrier to entry for creating a new validation is higher. This may seem counterintuitive. When writing business software, I always have developers with less experience (to no experience) in writing test-first domain objects. Few of them have used either a specification pattern or a validations library. I therefore need to ensure that, when implementing a new validation type (or rule), they have intensely understood the domain and the existing validations and checked that it does not already exist. Often rule types are there and easy to miss; other times, there is an explosion of rule types because a specific use of a generalised rule has been created as a type – so a little refactor is better. So, the benefit of having to slow down and implement a new call in the fluent interface object is one that pushes us to think harder, delaying rather than rushing.

Three, I should be able to review my tests at a different level of granularity. By granularity, I probably mean a different grouping. Often on a property there are a number of business rules in play. In the example of Name, let’s just imagine that I had correctly named it firstName – this set of tests is about the first name by itself. There is then another rule, and that is how it combines with lastName, because the business rule is that the two of them make up the full name. The next rule is to do with this combination: say, that the two names for some reason cannot be the same. I wouldn’t want to have that rule in the same test because that creates a test smell. Alternatively, I might have another rule about the length of the field. This rule is imposed because we are now having an interaction with a third party which requires this constraint. It would then be easy to create new rules that allow for a different grouping. I re-read this and the examples seem benign!

Let’s come back to the original syntax and compare the two and see if we have an easier test:

new ImageBanner { Name = "" }.SetupWithDefaultValuesFrom(ValidMO.ImageBanner).IsValid().ShouldBeFalse();

Is this any better?

	Test.ValidationSpecification<ImageBanner>(ValidMO.ImageBanner)
		.With(c => c.Name, "")
		.VerifyFails();

As I read this, I can see that everyone is going to want their own nomenclature, which will lead to rolling their own set of test helpers. Clearly, both use a functional style of programming that became easier with C# 3.5. However, the top example chains different library syntax together to make it work:

  • new ImageBanner { Name = "" } – C# 3.5 object initialiser
  • .SetupWithDefaultValuesFrom(ValidMO.ImageBanner) – custom helper
  • .IsValid() – domain object’s own self checking mechanism
  • .ShouldBeFalse() – BDD helper

The bottom example provides an abstraction across all of these so that it only uses the C# 3.5 syntax (generics and lambdas).

  • Test.ValidationSpecification<ImageBanner>(ValidMO.ImageBanner) – returns an object that contains the entity set up through the ValidMO
  • .With(c => c.Name, "") – allows you access back to your object to override values
  • .VerifyFails() – wraps the IsValid() and ShouldBeFalse()

In conclusion, I think it might work. Put differently, I think the effort may be worth it and pay dividends. I don’t think that it will distract from the Mother object strategy which I find invaluable in teaching people to keep data/canonical forms separate from logic and tests.

In reflection, there is one major design implication that I like. I no longer have to call IsValid() on the domain model. I have always put this call on the domain object because I want simple access to the validator. Putting it there makes tests much easier to write because I don’t have to instantiate a ValidatorRunner. Now, with Verify and VerifyFails, I can delegate the runner into this area. That would be nice and would clean up the domain model. However, it does mean that I am going to have to have an implementation of the runner that is available for the UI layer too. Hmmm, on second thoughts, we’ll have to see what the code looks like!
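For illustration, VerifyFails could hide the runner along these lines (assuming the Castle Validator ValidatorRunner, which is what I take the runner mentioned above to be; the wiring is my assumption):

	using Castle.Components.Validator;
	using NUnit.Framework;

	public static class Verification
	{
	    // The spec owns the runner, so the domain model no longer needs
	    // its own IsValid().
	    public static void VerifyFails(object entity)
	    {
	        var runner = new ValidatorRunner(new CachedValidationRegistry());
	        Assert.IsFalse(runner.IsValid(entity));
	    }
	}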