Archive

Posts Tagged ‘XP’

Validating Object Mothers

October 24th, 2010

I was in a pairing session and we decided that the syntax described here is way too noisy:

new Banner { Name="" }.SetupWithDefaultValuesFrom(ValidMO.Banner).IsValid().ShouldBeFalse()

So we rewrote it using anonymous objects which made it nicer to read:

a.Banner.isValid().with(new { Name="john" })

A couple of notes on this style:

  • I am using lowercase names for the test helpers: a, isValid and with above.
  • I am using a nested static class to create a DSL feel: eg a._object_ replaces the idea of ValidMO._object_
  • I am wrapping multiple assertions in isValid and isInvalid (they are wrappers that call the domain object's IsValid and provide the assertions; these assertions can be swapped for different test frameworks, which at this stage is MSTest – there is a sketch of that swap after the validator code below)

Unit test object builder

This code has two tests: Valid and InvalidHasNoName. The first test checks that I have set up (and maintained) the canonical model correctly and that it behaves well with the validator. The second test demonstrates testing an invalid object by stating only the exceptional value; here the rule is that the banner is invalid if it has no name.

  using Domain.Model;
  using Microsoft.VisualStudio.TestTools.UnitTesting;
  using Tests.Unit; // the isValid/isInvalid/with test extensions shown further down

  namespace UnitTests.Domain.Model
  {
      [TestClass]
      public class BannerPropertyValidationTest
      {
          [TestMethod]
          public void Valid()
          {
              a.Banner.isValid();
          }

          [TestMethod]
          public void InvalidHasNoName()
          {
              a.Banner.isInvalid().with(new {Name = ""});
          }
      } 
  }

Here’s the canonical form that gives us the object as a.Banner:

  using Domain.Model;

  namespace UnitTests.Domain
  {
      public static class a
      {
          public static Banner Banner
          {
              get  { return new Banner  { Name = "Saver" }; }
          }
      }
  }

At this stage, you should have a reasonable grasp of the object through the tests and what we think is exemplar scenario data. Here's the model with validations. There is also an IsValid extension method on the model that runs the validator; I'll show it at the end of the post.

  using Castle.Components.Validator;

  namespace Domain.Model
  {
      public class Banner
      {
          [ValidateNonEmpty]
          public virtual string Name { get; set; }

          // Description has no validations; it is used by the chained with() test below
          public virtual string Description { get; set; }
      }
  }

Builder extensions: with

Now for the extensions that hold this code together. It is a simple piece of code that takes the model and the anonymous object and merges them. As a side note, I originally thought that I would use LinFu to do this, but it can't duck type against concrete classes, only interfaces, and I don't have interfaces on a domain model.

	using System;

	namespace Domain
	{
	    public static class BuilderExtensions
	    {
	        public static T with<T>(this T model, object anon) where T : class
	        {
	            foreach (var anonProp in anon.GetType().GetProperties())
	            {
	                var modelProp = model.GetType().GetProperty(anonProp.Name);
	                if (modelProp != null)
	                {
	                    modelProp.SetValue(model, anonProp.GetValue(anon, null), null);
	                }
	            }
	            return model;
	        }
	    }
	}

If you want to understand the different scenarios it works on, here are the tests:

	using Domain;
	using Domain.Model;
	using Microsoft.VisualStudio.TestTools.UnitTesting;

	namespace Tests.Unit
	{
	    [TestClass]
	    public class WithTest
	    {
	        [TestMethod]
	        public void UpdateBannerWithName_ChangesName()
	        {
	            var a = new Banner { Name = "john" };
	            var b = a.with(new { Name = "12345" });
	            Assert.AreEqual(b.Name, "12345");
	        }

	        [TestMethod]
	        public void UpdateBannerWithEmptyName_ChangesName()
	        {
	            var a = new Banner { Name = "john" };
	            var b = a.with(new { Name = "" });
	            Assert.AreEqual(b.Name, "");
	        }

	        [TestMethod]
	        public void UpdateBannerWithEmptyName_ChangesNameAsRef()
	        {
	            var a = new Banner { Name = "john" };
	            a.with(new { Name = "" });
	            Assert.AreEqual(a.Name, "");
	        }

	        [TestMethod]
	        public void UpdateBannerChainedWith_ChangesNameAndDescriptionAsRef()
	        {
	            var a = new Banner { Name = "john" };
	            a.with(new { Name = "" }).with(new { Description = "hi" });
	            Assert.AreEqual(a.Name, "");
	            Assert.AreEqual(a.Description, "hi");
	        }

	        [TestMethod]
	        public void UpdateBannerWithName_ChangesNameOnly()
	        {
	            var a = new Banner { Name = "john", Description = "ab" };
	            var b = a.with(new { Name = "12345" });
	            Assert.AreEqual(b.Name, "12345");
	            Assert.AreEqual(b.Description, "ab");
	        }

	        [TestMethod]
	        public void UpdateBannerWithPropertyDoesntExist_IsIgnored()
	        {
	            var a = new Banner { Name = "john" };
	            var b = a.with(new { John = "12345" });
	            // nothing happens!
	        }
	    }
	}

Builder extensions: validators

Now that we have the builder in place, we want to be able to do assertions on the new values in the object. Here's what we are looking for: a.Banner.isValid().with(new { Name = "john" }). The complexity here is that we want to write the isValid or isInvalid before the with; we felt that it read better. This adds a little complexity to the code, but not much.

The general structure below goes something like this:

  1. add an extension method for isValid (or isInvalid) on a domain object
  2. that helper returns a ValidationContext, in the form of a concrete Validator holding a test assertion
  3. we create another with extension, this time on ValidationContext, to run the Validator
  4. finally, inside that with we chain the builder's with over the anonymous object and run the assertion

	using System;
	using Domain;
	using Domain.Model;
	using Microsoft.VisualStudio.TestTools.UnitTesting;

	namespace Tests.Unit
	{
	    public static class ValidationContextExtensions
	    {
	        public static ValidationContext<T> isInvalid<T>(this T model)
	        {
	            return new Validator<T>(model, (a) => Assert.IsFalse(a));
	        }

	        public static ValidationContext<T> isValid<T>(this T model)
	        {
	            return new Validator<T>(model, (a) => Assert.IsTrue(a));
	        }

	        public static T with<T>(this ValidationContext<T> model, object exceptions) where T : class
	        {
	            model.Test.with(exceptions);
	            model.Assert(model.Test.IsValid());
	            return model.Test;
	        }
	    }

	    public interface ValidationContext<T>
	    {
	        T Test { get;  set; }
	        Action<bool> Assertion { get;  set; }
	        void Assert(bool isValid);
	    }

	    public class Validator<T> : ValidationContext<T>
	    {
	        public Validator(T test, Action<bool> assertion) {
	            Test = test;
	            Assertion = assertion;
	        }
	        public T Test { get; set; }
	        public virtual Action<bool> Assertion { get; set; }
	        public void Assert(bool value)
	        {
	            Assertion.Invoke(value);
	        }
	    }
	}
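
As the bullet list at the top mentioned, the assertion delegate is the only MSTest-specific piece. Here is a minimal sketch of what swapping the test framework might look like, assuming NUnit and the ValidationContext&lt;T&gt;/Validator&lt;T&gt; types above; the namespace and class names here are hypothetical.

	using NUnit.Framework;
	using Tests.Unit; // ValidationContext<T>, Validator<T> and the with() extension from above

	namespace Tests.NUnitAsserts
	{
	    public static class NUnitValidationExtensions
	    {
	        // Only the assertion delegates change; with() keeps working because it calls
	        // ValidationContext<T>.Assert rather than a test framework directly.
	        public static ValidationContext<T> isInvalid<T>(this T model)
	        {
	            return new Validator<T>(model, valid => Assert.IsFalse(valid));
	        }

	        public static ValidationContext<T> isValid<T>(this T model)
	        {
	            return new Validator<T>(model, valid => Assert.IsTrue(valid));
	        }
	    }
	}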

Just a last comment about the interface and the implementation. I have gone for naming the interface differently from the implementation (ie not IValidator) because I want to avoid the I prefix convention and see where it takes me. In this case, the with needs to be able to chain itself onto something; to me that is the validation context. This context then has a validator in it (eg valid/invalid). Here the interface isn't merely a contract: it is the type the test code actually works against. In fact, we could almost have the interface living without its property and method definitions at this stage, or perhaps the interface could have become an abstract class; either solution would entail less code (but slightly weird conventions).

Righto, lastly a few more tests:

	using Domain.Model;
	using Microsoft.VisualStudio.TestTools.UnitTesting;

	namespace Tests.Unit
	{

	    [TestClass]
	    public class ValidationContextTest
	    {
	        private const string NameCorrectLength = "johnmore_than_six";
	        private const string NameTooShort = "john";

	        [TestMethod]
	        public void UpdateBannerWithInvalidExtensionOnWith()
	        {
	            var a = new Banner { Name = NameTooShort };
	            a.isInvalid().with(new { Name = "" });
	        }

	        [TestMethod]
	        public void UpdateBannerWithValidExtensionOnWith()
	        {
	            var a = new Banner { Name = NameTooShort };
	            a.isValid().with(new { Name = NameCorrectLength });
	        }

	        [TestMethod]
	        public void NewBannerIsValidWithoutUsingWith()
	        {
	            var a = new Banner { Name = NameTooShort };
	            a.isValid();
	        }

	        [TestMethod]
	        public void NewBannerIsInvalidWithoutUsingWith()
	        {
	            var a = new Banner { Name = NameCorrectLength };
	            a.isInvalid();
	        }
	    }
	}

One final bit, in case you are wondering about the difference between IsValid (with caps) and isValid: here's the validator-running code on the domain model that we wrap in our test validators.

	using Castle.Components.Validator;

	namespace Domain.Model
	{
	    public static class ValidationExtension
	    {
	        private static readonly CachedValidationRegistry Registry = new CachedValidationRegistry();

	        public static bool IsValid<T>(this T model) where T : class 
	        {
	            return new ValidatorRunner(Registry).IsValid(model);
	        }
	    }
	}

I hope that helps.

Object Mothers as Fakes

July 30th, 2009

Update: I was in a pairing session and we decided that this syntax is way too noisy, so we have rewritten it (see the post above). Effectively it becomes a.Banner.isValid().with(new { Name = "john" }).

This is a follow-up to Fakes Arent Object Mothers Or Test Builders. Over the last few weeks I have had the chance to re-engage with object mothers and fakes. I am going to document a helper or two for building fakes using extensions. But before that I want to make an observation or two:

  • I have been naming my mother objects ValidMO.xxxxx and making sure they are static
  • I find that I have a ValidMO per project, and that is better than a shared one. For example, in my unit tests the ValidMO tends to need Ids populated, and that is really good as I explore the domain and have to build up the object graph. In my integration tests, however, Id shouldn't be populated because it comes from the database and of course changes over time. My MOs in the regression and acceptance tests are different again. I wouldn't have expected this. It is great because I don't have to spawn yet another project just to hold the MO data as I have in previous projects. (There is a sketch of this difference just after this list.)
  • I find that I only need one MO per object
  • Needing another valid example suggests, in fact, a new type of domain object
  • I don't need to have invalid MOs (I used to)
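
For example, the unit-test ValidMO later in this post populates Id = 1 so the object graph can be explored in isolation, while an integration-test version of the same mother leaves the Id out because the database assigns it. A rough sketch of the integration-test flavour (the values here are illustrative only):

  using Core.Domain.Model;

  namespace IntegrationTests.Core.Domain.MotherObjects
  {
      public static class ValidMO
      {
          public static IBanner ImageBanner
          {
              get
              {
                  // no Id: the database assigns it when the banner is persisted
                  return new ImageBanner
                             {
                                 Name = "KiwiSaver",
                                 Url = "http://localhost/repos/first-image.png",
                                 Destination = "http://going-to-after-click.com",
                                 IsActive = true
                             };
              }
          }
      }
  }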

Now for some code.

When building objects for test, the pattern is to either:

  • provide the MO as-is
  • build the exceptions in a new object and fill out the rest with the MO
  • not use the MO at all (very rarely)

I use this pattern to build objects with simple properties, not lists – you'll need to extend the tests/code for that. The sketch below shows the three options.
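
Roughly, the three options look like this inside a test (a hedged sketch using the ImageBanner mother and the SetupWithDefaultValuesFrom extension shown further down):

  // 1. provide the MO as-is
  var valid = ValidMO.ImageBanner;

  // 2. state only the exceptional value and fill out the rest from the MO
  var noName = new ImageBanner { Name = "" }.SetupWithDefaultValuesFrom(ValidMO.ImageBanner);

  // 3. no MO at all (very rarely): hand-roll every value the validations need
  var handRolled = new ImageBanner
                       {
                           Name = "KiwiSaver",
                           Url = "http://localhost/repos/first-image.png",
                           Destination = "http://going-to-after-click.com"
                       };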

To do this I primarily use a builder as an extension. This extension differs between unit and integration tests.

Unit test object builder

This code has two tests: Valid and InvalidHasNoName. The tests demonstrate a couple of things, the first of which is that I am being bad by having three assertions in the first test. Let's just say that's for readability ;-) That first set of assertions checks that I have set up the ValidMO correctly and that it behaves well with the extension helper SetupWithDefaultValuesFrom. This series of tests has been really important in the development of the extensions and quickly points to problems – and believe me, this approach hits limits quickly. Nonetheless it is great for most POCOs.

The second test demonstrates testing an invalid object by stating only the exceptional value. Here the rule is that the image banner is invalid if it has no name, so the code says: make the name empty and populate the rest of the model from the mother.

  using Core.Domain.Model;
  using Contrib.TestHelper;
  using Microsoft.VisualStudio.TestTools.UnitTesting;
  using UnitTests.Core.Domain.MotherObjects;
  using NBehave.Spec.MSTest;

  namespace UnitTests.Core.Domain.Model
  {
      /// <summary>
      /// Summary description for Image Banner Property Validation Test
      /// </summary>
      [TestClass]
      public class ImageBannerPropertyValidationTest
      {
          [TestMethod]
          public void Valid()
          {
              ValidMO.ImageBanner.ShouldNotBeNull();
              ValidMO.ImageBanner.IsValid().ShouldBeTrue();
              new ImageBanner().SetupWithDefaultValuesFrom(ValidMO.ImageBanner).IsValid().ShouldBeTrue();
          }

          [TestMethod]
          public void InvalidHasNoName()
          {
              new ImageBanner { Name = "" }.SetupWithDefaultValuesFrom(ValidMO.ImageBanner).IsValid().ShouldBeFalse();
          }
      } 
  }

Here’s the ValidMO.

  using Core.Domain.Model;

  namespace UnitTests.Core.Domain.MotherObjects
  {
      public static class ValidMO
      {
          public static IBanner ImageBanner
          {
              get
              {
                  return new ImageBanner
                             {
                                 Id = 1,
                                 Name = "KiwiSaver",
                                 Url = "http://localhost/repos/first-image.png",
                                 Destination = "http://going-to-after-click.com",
                                 Description = "Kiwisaver Banner for latest Govt initiative",
                                 IsActive = true,
                                 IsDeleted = false
                             };
              }
          }
      }
  }

At this stage, you should have a reasonable grasp of the object through the tests and what we think is exemplar scenario data. Here’s the model if you insist.

  using Castle.Components.Validator;

  namespace Core.Domain.Model
  {
      public class ImageBanner : Banner
      {
          [ValidateNonEmpty, ValidateLength(0, 200)]
          public virtual string Url { get; set; }

          [ValidateNonEmpty, ValidateLength(0, 200)]
          public string Destination { get; set; }
      }
  }

Builder extensions

Now for the extensions that hold this code together. It is a simple piece of code that takes the model and the MO and merges them. Before we go too far: it is actually a little more ugly than I want it to be, and I haven't had time to think of a more elegant solution. There are actually two methods. One treats zero ints/longs and empty strings as unset, so those fields are overridden with the mother's defaults; the other lets empty strings and zeros through as deliberate values. So generally, you need the two.

  using System;
  using System.Linq;
  using Contrib.Utility;

  namespace Contrib.TestHelper
  {
      /// <summary>
      /// Test helpers on Mother objects
      /// </summary>
      public static class MotherObjectExtensions
      {

          /// <summary>
          /// Sets up the specified model ready for test by merging it with a fake (mother) model. The code copies
          /// the properties of the given model onto the fake; if a property on the model is a 0 int/long or an
          /// empty string it is treated as unset and the fake's default is used instead.
          /// </summary>
          /// <typeparam name="T"></typeparam>
          /// <param name="model">The model.</param>
          /// <param name="fake">The fake.</param>
          /// <returns>the model</returns>
          public static T SetupWithDefaultValuesFrom<T>(this T model, T fake)
          {
              var props = from prop in model.GetType().GetProperties() // select the model properties that should override the fake (the fake is the base)
                          where prop.CanWrite
                          && prop.GetValue(model, null) != null
                          && (
                              ((prop.PropertyType == typeof(int) || prop.PropertyType == typeof(int?)) && prop.GetValue(model, null).As<int>() != 0)
                           || ((prop.PropertyType == typeof(long) || prop.PropertyType == typeof(long?)) && prop.GetValue(model, null).As<long>() != 0)
                           || (prop.PropertyType == typeof(string) && prop.GetValue(model, null).As<string>() != String.Empty)
                              )
                          select prop;

              foreach (var prop in props)
                  prop.SetValue(fake, prop.GetValue(model, null), null); //override the fake with model values

              return fake;
          }
   
          /// <summary>
          /// Sets up the specified model ready for test by merging it with a fake (mother) model. This method is the
          /// same as <see cref="SetupWithDefaultValuesFrom{T}"/> except that empty strings and 0 ints/longs are kept
          /// as deliberate values rather than being overridden by the fake.
          /// </summary>
          /// <typeparam name="T"></typeparam>
          /// <param name="model">The model.</param>
          /// <param name="fake">The fake.</param>
          /// <returns>the model</returns>
          public static T SetupWithDefaultValuesFromAllowEmpty<T>(this T model, T fake)
          {
              var props = from prop in model.GetType().GetProperties() // select the model properties that should override the fake (the fake is the base)
                          where prop.CanWrite
                          && prop.GetValue(model, null) != null
                          select prop;

              foreach (var prop in props)
                  prop.SetValue(fake, prop.GetValue(model, null), null); //override the fake with model values

              return fake;
          }
      }
  }
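
To make the difference between the two methods concrete, here is a minimal hedged sketch (the full tests further down cover the same ground). Account is a hypothetical POCO used only for this illustration: SetupWithDefaultValuesFrom treats 0 and "" as unset so the mother's defaults win, while SetupWithDefaultValuesFromAllowEmpty keeps them as deliberate values.

  public class Account // hypothetical POCO, only for this illustration
  {
      public int Id { get; set; }
      public string Name { get; set; }
  }

  [TestMethod]
  public void StrictVersusAllowEmpty()
  {
      // strict: 0 and "" are treated as unset, so the mother's values come through
      var strict = new Account { Id = 0, Name = "" }
          .SetupWithDefaultValuesFrom(new Account { Id = 7, Name = "default" });
      strict.Id.ShouldEqual(7);
      strict.Name.ShouldEqual("default");

      // allow-empty: the zero and the empty string are kept as deliberate values
      var lenient = new Account { Id = 0, Name = "" }
          .SetupWithDefaultValuesFromAllowEmpty(new Account { Id = 7, Name = "default" });
      lenient.Id.ShouldEqual(0);
      lenient.Name.ShouldEqual("");
  }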

Oh, and I just noticed there is another little helper in there too. It is the object.As helper that casts my results, in this case as a string. I will include it, and thank Rob for the original code and Mark for an update – and probably our employer for sponsoring our after-hours work ;-) :

    using System;
    using System.IO;


    namespace Contrib.Utility
    {
        /// <summary>
        /// Class used for type conversion related extension methods
        /// </summary>
        public static class ConversionExtensions
        {
            public static T As<T>(this object obj) where T : IConvertible
            {
                return obj.As<T>(default(T));
            }

            public static T As<T>(this object obj, T defaultValue) where T : IConvertible
            {
                try
                {
                    string s = obj == null ? null : obj.ToString();
                    if (s != null)
                    {
                        Type type = typeof(T);
                        bool isEnum = typeof(Enum).IsAssignableFrom(type);
                        return (T)(isEnum ?
                            Enum.Parse(type, s, true)
                            : Convert.ChangeType(s, type));
                    }
                }
                catch
                {
                }
                return defaultValue; 
            }

            public static T? AsNullable<T>(this object obj) where T : struct, IConvertible
            {
                try
                {
                    string s = obj as string;
                    if (s != null)
                    {
                        Type type = typeof(T);
                        bool isEnum = typeof(Enum).IsAssignableFrom(type);
                        return (T)(isEnum ?
                            Enum.Parse(type, s, true)
                            : Convert.ChangeType(s, type));
                    }
                }
                catch
                {

                }
                return null;
            }

            public static byte[] ToBytes(this Stream stream)
            {
                int capacity = stream.CanSeek ? (int)stream.Length : 0;
                using (MemoryStream output = new MemoryStream(capacity))
                {
                    int readLength;
                    byte[] buffer = new byte[4096];

                    do
                    {
                        readLength = stream.Read(buffer, 0, buffer.Length);
                        output.Write(buffer, 0, readLength);
                    }
                    while (readLength != 0);

                    return output.ToArray();
                }
            }

            public static string ToUTF8String(this byte[] bytes)
            {
                if (bytes == null)
                    return null;
                else if (bytes.Length == 0)
                    return string.Empty;
                var str = System.Text.Encoding.UTF8.GetString(bytes);

                // If the string begins with the byte order mark
                if (str[0] == '\xFEFF')
                    return str.Substring(1);
                else
                {
                    return str;
                }
            }
        }
    }
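
A quick, hedged example of how As behaves, including the fallback to the supplied default when the conversion fails:

    var number = "42".As<int>();                // 42
    var fallback = "abc".As<int>(7);            // 7: the conversion throws, so the default is returned
    var missing = ((object)null).As<string>();  // null: nothing to convert, default(string) comes back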
    

Finally, if you actually do use this code, here are the tests:

  using Contrib.TestHelper;
  using Microsoft.VisualStudio.TestTools.UnitTesting;
  using NBehave.Spec.MSTest;

  namespace Contrib.Tests.TestHelper
  {
      /// <summary>
      /// Summary description for MotherObject Extensions Test
      /// </summary>
      [TestClass]
      public class MotherObjectExtensionsTest
      {

          [TestMethod]
          public void EmptyString()
          {
              var test = new Test { }.SetupWithDefaultValuesFrom(new Test { EmptyString = "this" });
              test.EmptyString.ShouldEqual("this");
          }
          [TestMethod]
          public void ModelStringIsUsed()
          {
              var test = new Test { EmptyString = "that" }.SetupWithDefaultValuesFrom(new Test { EmptyString = "this" });
              test.EmptyString.ShouldEqual("that");
          }
          [TestMethod]
          public void EmptyStringIsAccepted()
          {
              var test = new Test { EmptyString = "" }.SetupWithDefaultValuesFromAllowEmpty(new Test { EmptyString = "this" });
              test.EmptyString.ShouldEqual("");
          }

          [TestMethod]
          public void ZeroInt()
          {
              var test = new Test { }.SetupWithDefaultValuesFrom(new Test { ZeroInt = 1 });
              test.ZeroInt.ShouldEqual(1);
          }
          [TestMethod]
          public void ModelIntIsUsed()
          {
              var test = new Test { ZeroInt = 2 }.SetupWithDefaultValuesFrom(new Test { ZeroInt = 1 });
              test.ZeroInt.ShouldEqual(2);
          }
          [TestMethod]
          public void ZeroIntIsAccepted()
          {
              var test = new Test { ZeroInt = 0}.SetupWithDefaultValuesFromAllowEmpty(new Test { ZeroInt = 1 });
              test.ZeroInt.ShouldEqual(0);
          }

          [TestMethod]
          public void ZeroLong()
          {
              var test = new Test { }.SetupWithDefaultValuesFrom(new Test { ZeroLong = 1 });
              test.ZeroLong.ShouldEqual(1);
          }
          [TestMethod]
          public void ModelLongIsUsed()
          {
              var test = new Test { ZeroLong = 2 }.SetupWithDefaultValuesFrom(new Test { ZeroLong = 1 });
              test.ZeroLong.ShouldEqual(2);
          }
          [TestMethod]
          public void ZeroLongIsAccepted()
          {
              var test = new Test {ZeroLong = 0}.SetupWithDefaultValuesFromAllowEmpty(new Test { ZeroLong = 1 });
              test.ZeroLong.ShouldEqual(0);
          }

          private class Test
          {
              public string EmptyString { get; set; }
              public int ZeroInt { get; set; }
              public long ZeroLong { get; set; }
          }
      }
  }

I hope that helps.

Fakes Arent Object Mothers Or Test Builders

November 20th, 2008

Fakes are not Object Mothers or Test Data Builders and Extensions are a bridge to aid BDD through StoryQ

I've just spent the week at Agile2008 and have had it pointed out to me that I have been using Fakes as nomenclature when in fact it is an Object Mother pattern. So I have set off to learn about the Object Mother pattern and in doing so came across the Test Data Builder pattern. But because I am currently working in C# 3.5 and using PI/POCOs/DDD, the builder pattern is looking a little heavy, and the criticism of Object Mother holds: it ends up as a God object and hence a little too cluttered.

I've also found utilising Extensions in C# 3.5 to be a good way to keep the OM clean. Adding extensions cleans up the code significantly, enough that BDD through StoryQ became attractive. Extensions have these advantages:

  • They allow you to put a SetUp (Build/Construct) method on the object itself
  • They make these methods available within the test project only
  • They keep data/values separate from setup, ie as a cross-cutting concern
  • In tests, specific edge data – or basic tests – aren't put into object mothers
  • Extensions end up as concise, DRY code

Background

I have been using Fake models as a strategy for unit testing (based on Jimmy Nilsson's DDD in C# book). Technically, these are not fakes: fakes are usually described as a strategy to replace service objects rather than value objects. My use of fakes is really the "Object Mother" pattern. (Am I the only person six years behind? Probably.) Since learning about this pattern, I've found it on Fowler's bliki and the original description at XP Universe 2001 (however, I haven't been able to download the paper as yet – xpuniverse.com isn't responding).

Having read entries around this pattern, another pattern emerged: the "Test Data Builder" pattern. It is a likeable pattern. (In fact, it could be leveraged as an internal DSL, but I haven't pursued that as yet.) But, given the work I do in C#, it looks a little heavy; it is most useful for coping with complex dependencies. In contrast, the fake/object mother I have been using has been really easy to teach to, and well liked by, developers.

Basic approach

To test these approaches, I am going to create a simple model: a User which has an Address. Both models have validations on their fields and I can ask the model to validate itself. You'll note that I am using Castle's validation component. This breaks POCO rules by adding a library dependency, but in practice it is a good trade-off. My basic test heuristic:

  • populate with defaults = valid
  • test each field with an invalid value
  • ensure that inherited objects are also validated

Four Strategies:

  1. Object Mother
  2. Test Data Builder
  3. Extensions with Object Mother
  4. StoryQ with Extensions (and Object Mother)

Strategy One: Object Mother

First, the test loads up a valid user through a static method (or property if you wish). I use the convention "Valid" to always provide back what is always a valid model. This is a good way to demonstrate to new devs what the exemplar object looks like. Interestingly, in C# 3.5, the new object initializer syntax works very well here. You can read down the list of properties easily, often without needing to look at the original model code. Moreover, in the original object, there is no need for an overloaded constructor.

[Test]
 public void ValidUser()
 {
     var fake = TestUser.Valid();
     Assert.IsTrue(fake.IsOKToAccept());
 }

  public class TestUser
  {
      public static User Valid()
      {
          return new User
                     {
                         Name = "Hone",
                         Email = "Hone@somewhere.com",
                         Password = "Hone",
                         Confirm = "Hone"
                     };
      }
  }

Oh, and here's the User model if you are interested. Go to Castle Validations if this isn't clear.

 public class User
 {
     [ValidateLength(1, 4)]
     public string Name { get; set; }

     [ValidateEmail]
     public string Email { get; set; }

     [ValidateSameAs("Confirm")]
     [ValidateLength(1,9)]
     public string Password { get; set; }
     public string Confirm { get; set; }

     public bool IsOKToAccept()
     {
         ValidatorRunner runner = new ValidatorRunner(new CachedValidationRegistry());
         return runner.IsValid(this);
     }
  }

Second, the test works through confirming that each validation works. Here we need to work through each of the fields, checking for invalid values. With the object mother pattern, each case is a separate static method: email, name and password. The strategy is to always start with the valid model, make only one change and test. It's a simple strategy, easy to read, and avoids duplication. The problem with this approach is hiding test data: we have to trust that the method, InvalidName, actually produces an invalid name, or follow it through to the static method. In practice, it works well enough.

public static User InvalidEmail()
{
    var fake = Valid();
    fake.Email = "invalid@@some.com";
    return fake;
}

public static User InvalidName()
{
    var fake = Valid();
    fake.Name = "with_more_than_four";
    return fake;
}

public static User InvalidPassword()
{
    var fake = Valid();
    fake.Password = "not_same";
    fake.Confirm = "different";
    return fake;
}

[Test]
public void InValidEmail()
{
    var fake = TestUser.InvalidEmail();
    Assert.IsFalse(fake.IsOKToAccept());
}

[Test]
public void InValidName()
{
    var fake = TestUser.InvalidName();
    Assert.IsFalse(fake.IsOKToAccept());
}

[Test]
public void InValidPassword()
{
    var fake = TestUser.InvalidPassword();
    Assert.IsFalse(fake.IsOKToAccept());
}

Strategy Two: Test Data Builder

I’m not going to spend long on this piece of code because the builder looks like too much work. The pattern is well documented so I assume you already understand it. I do think that it could be useful if you want to create an internal DSL to work through dependencies. Put differently, this example is too simple to demonstrate a case for when to use this pattern (IMHO).

First, here’s the test for the valid user. I found that I injected the valid user behind the scenes (with an object mother) so that I could have a fluent interface with Build(). I can overload the constructor to explicitly inject a user too. I have two options when passing in an object: (1) inject my object mother or (2) inject a locally constructed object. The local construction is useful for explicitly seeing what is being tested. But really, it is the syntactic sugar of C# that is giving visibility rather than the pattern; so, for simple cases, the syntax of the language renders the pattern more verbose.

[Test]
public void ValidUser()
{
    var fake = new UserBuilder().Build();
    Assert.IsTrue(fake.IsOKToAccept());

}

[Test]
public void LocalUser()
{
    var fake = new UserBuilder(TestUser.Valid()).Build();
    Assert.IsTrue(fake.IsOKToAccept());

    fake = new UserBuilder(new User
                    {
                        Name = "Hone", 
                        Email = "good@com.com",
                        Password = "password",
                        Confirm = "password",
                        Address = new Address
                                      {
                                          Street = "Fred",
                                          Number = "19"
                                      }
                    })
                    .Build();
    Assert.IsTrue(fake.IsOKToAccept());
}

public class UserBuilder
{
    private readonly User user;

    public UserBuilder()
    {
        user = TestUser.Valid();
    }

    public UserBuilder(User user)
    {
        this.user = user;
    }

    public User Build() { return user; }
}

Second, let's validate each field. On the positive side, it is clear in the code what constitutes an invalid value, and the builder caters for dependencies such as the one in withPassword between password and confirm. But, really, there is just too much typing; I have to create a method for every field and dependency. For simple models, I am not going to do this. For complex or large models, it would take ages.

[Test]
public void InValidEmail()
{
    var fake = new UserBuilder()
        .withEmail("incorect@@@emai.comc.com.com")
        .Build();
    Assert.IsFalse(fake.IsOKToAccept());
}

[Test]
public void InValidName()
{
    var fake = new UserBuilder()
        .withName("a_name_longer_than_four_characters")
        .Build();
    Assert.IsFalse(fake.IsOKToAccept());
}

[Test]
public void InValidPassword()
{
    var fake = new UserBuilder()
        .withPassword("bad_password")
        .Build();
    Assert.IsFalse(fake.IsOKToAccept());
}

public class UserBuilder
{
    private readonly User user;

    public UserBuilder()
    {
        user = TestUser.Valid();
    }

    public UserBuilder(User user)
    {
        this.user = user;
    }

    public UserBuilder withEmail(string email)
    {
        user.Email = email;
        return this;
    }

    public UserBuilder withPassword(string password)
    {
        user.Confirm = password;
        user.Password = password;
        return this;
    }

    public UserBuilder withName(string name)
    {
        user.Name = name;
        return this;
    }

    public UserBuilder withAddress(Address address)
    {
        user.Address = address;
        return this;
    }

    public User Build() { return user; }
}

Strategy Three: Object Mother with Extensions

Having tried the two previous patterns, I now turn to what extensions have to offer. (Extensions are something I have been meaning to try for a while.) As it turns out, extensions combined with object initializers allow for separation of concerns and DRY, and also let us keep valid and invalid data separate in tests. There is a downside of course: it requires some (reflection) code to make it play nicely. More code – a slippery slope, some might say …

First, I have added a new method to my User model in the form of an extension. I have named it SetUp so that it translates into the setup and teardown (init and dispose) phases of unit testing. I could use Build or Construct instead. This method returns my object mother. I still like to keep my object mother data separate because I think of construction and data as separate concerns.

[Test]
 public void ValidUser()
 {
     var fake = new User().SetUp();
     Assert.IsTrue(fake.IsOKToAccept());
 }

 public static User SetUp(this User user)
 {
     User fake = TestUser.Valid();
     return fake; 
 }

I also want to test how to create a version of the user visible from the test code. This is where more code is required to combine any provided fields from the test with the default, valid model. In returning a hydrated test user, this method accepts your partially hydrated object and then adds defaults. The goal is that your unit test code only provides the specifics for the test. The rest of the valid fields are opaque to your test; the code reflects on your model, only hydrating empty fields so that validations do not fail. This reflection code is used by all SetUp methods across models.

[Test]
 public void LocalUser()
 {
     var fake = new User { Name = "John", Email = "valid@someone.com" }.SetUp();
     Assert.IsTrue(fake.IsOKToAccept());
 }

 public static User SetUp(this User user)
 {
     User fake = TestUser.Valid();
     SyncPropertiesWithDefaults(user, fake);
     return fake; 
 }

 private static void SyncPropertiesWithDefaults(object obj, object @base)
   {
       foreach (PropertyInfo prop in obj.GetType().GetProperties())
       {
           object[] val = new object[1];
           val[0] = obj.GetType().GetProperty(prop.Name).GetValue(obj, null);
           if (val[0] != null)
           {
               obj.GetType().InvokeMember(prop.Name, BindingFlags.SetProperty, Type.DefaultBinder, @base, val);
           }
       }
   }

Second, let's again look at the code to validate all the fields. Note that at this point there are no other object mothers. This strategy says: use object mothers to model significant data and avoid cluttering them with edge cases. The result is that you hand in just the field/value you want to isolate and then test. I find this readable, and it addresses the concern of not making the edge case (invalid) data visible.

[Test]
public void InValidEmail()
{
    var fake = new User { Email = "BAD@@someone.com" }.SetUp();
    Assert.IsFalse(fake.IsOKToAccept());
}

[Test]
public void InValidName()
{
    var fake = new User { Name = "too_long_a_name" }.SetUp();
    Assert.IsFalse(fake.IsOKToAccept());
}

[Test]
public void InValidPassword()
{
    var fake = new User { Password = "password_one", Confirm = "password_two" }.SetUp();
    Assert.IsFalse(fake.IsOKToAccept());
}

Using extensions combined with object initializers is nice. We can also format the code to make it look like a fluent interface if needed. For example:

[Test]
public void InValidPassword()
{
    var fake = new User 
                { 
                    Password = "password_one", 
                    Confirm = "password_two" 
                }
                .SetUp();
    Assert.IsFalse(fake.IsOKToAccept());
}

Strategy Four: BDD and StoryQ

Having worked out that extensions reduce repetitious code, I still think there is a smell here. Are those edge cases going to add value in their current form? They really are bland. Sure, the code is tested. But, really, who wants to read and review those tests? I certainly didn't use them to develop the model code; they merely assert over time that my assumptions haven't changed. Let's look at how I would have written the same tests using BDD, and in particular StoryQ. Here's a potential usage of my User model.

Story: Creating and maintaining users

  As an user
  I want to have an account
  So that I can register, login and return to the site

  Scenario 1: Registration Page
    Given I enter my name, address, email and password     
    When username isn't longer than 4 characters           
      And email is validated
      And password is correct length and matches confirm   
      And address is valid                                 
    Then I can now login
      And I am sent a confirmation email

Leaving aside any problems in the workflow (which there are), the story works with the model in context. Here is the code that generates this story:

[Test]
public void Users()
{
    Story story = new Story("Creating and maintaining users");

    story.AsA("user")
        .IWant("to have an account")
        .SoThat("I can register, login and return to the site");

    story.WithScenario("Registration Page")
        .Given(Narrative.Exec("I enter my name, address, email and password", 
                 () => Assert.IsTrue(new User().SetUp().IsOKToAccept())))
        
        .When(Narrative.Exec("username isn't longer than 4 characters", 
                 () => Assert.IsFalse(new User{Name = "Too_long"}.SetUp().IsOKToAccept())))

	        .And(Narrative.Exec("email is validated", 
                 () => Assert.IsFalse(new User { Email = "bad_email" }.SetUp().IsOKToAccept())))

        .And(Narrative.Exec("password is correct length and matches confirm", 
                 () => Assert.IsFalse(new User { Password = "one_version", Confirm = "differnt"}.SetUp().IsOKToAccept())))

        .And(Narrative.Exec("address is valid", 
                  () => Assert.IsFalse(new User { Address = new Address{Street = null}}.SetUp().IsOKToAccept())))
        
	        .Then(Narrative.Text("I can now login"))
        .And(Narrative.Text("I am sent a confirmation email"));

    story.Assert();
}

Working with BDD around domain models and validations makes sense to me. I think that it is a good way to report validations back up to the client and the team. There is also a good delineation between the “valid” model (“Given” section) and “invalid” (“When” section) edge cases in the structure. In this case, it also demonstrates that those models so far do nothing because there are no tests in the “Then” section.

Some quick conclusions

  1. The syntactic sugar of object initializers in C# 3.5 avoids the need for the "with" methods found in the Test Data Builder pattern (see the sketch after this list)
  2. Extensions may better replace the Build method in the builder
  3. Object mothers are still helpful to retain separation of concerns (eg data from construction)
  4. Go BDD ;-)
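
To illustrate the first point, here is the same invalid-email case expressed both ways, assuming the UserBuilder and the SetUp extension from the strategies above are both in scope (a sketch, not a prescription):

[Test]
public void InValidEmail_BothStyles()
{
    // Test Data Builder: one with-method per field to override
    var viaBuilder = new UserBuilder().withEmail("incorect@@@emai.comc.com.com").Build();
    Assert.IsFalse(viaBuilder.IsOKToAccept());

    // C# 3.5 object initializer plus the SetUp extension: no with-methods needed
    var viaInitializer = new User { Email = "incorect@@@emai.comc.com.com" }.SetUp();
    Assert.IsFalse(viaInitializer.IsOKToAccept());
}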

Well that’s about it for now. Here’s a download of the full code sample in VS2008.

Postscript: About Naming Conventions

How do I go about naming my object mother? I have lots of options: Test, Fake, ObjectMother, OM, Dummy. Of course, if I were a purist it would be ObjectMother, but that doesn't sit well with me when explaining it to others. Although it makes the pattern explicit, I find Fake most useful (eg FakeUser). It rolls off the tongue and is self-evident enough to outsiders as a strategy. Test (eg TestUser), on the other hand, is generic enough that I have to remind myself of its purpose, and I find I always need to do quick translations from the tests themselves. For example, with UserTest and TestUser I have to read them to check which one I am working with. For these samples, I have used TestUser. If you come back to my production code you will find FakeUser. I hope that doesn't prove problematic in the long run, as it isn't really a fake.

Naked Planning – Arlo Belshee from Agile 2008

November 11th, 2008

There is a nice podcast by Arlo Belshee on the Agile Toolkit Podcast: Naked Planning, Promiscuous Planning and other Unmentionables (sorry, I don't have a url). He has a couple of interesting points (around 17-23 mins). He is basically using Lean-style queuing for planning, and I am interested in his ideas around prioritisation and estimation. He argues that having estimations in prioritisation creates a selection bias; that is, the business does not necessarily choose the options that are best for it (ie delivering business value in the form of cash or a long-term differentiator). He argues against a traditional cost-benefit analysis. This analysis is a 2 × 2 grid, cost (x) by value (y), as a scatterplot, as below:


          Cost-Benefit
          ============
       ^
high   |                                    long-term value (market differentiator)
       |    short-term value (cash)
       |                                   
(value)|                         
       |                                   
       |    low-cost/low-value (death march)
low    |                                       never done (expensive and little value)
       +--------------------------------------------------------------------------------------------------------------------->
           low                    (cost)                 high 
    

In this approach there are four main positions: (1) top-left are the short-term value options that are cheap and easy to do but provide high value. These tend to get you cash in the marketplace, but competitors can copy and innovate at the same rate, so they are valuable only in the short term; they are also known as cash cows. (2) Top-right are the long-term value options. These are the market differentiators that you want to build; because they have a high cost, they are harder for the competition to create. (3) Bottom-left are the low-cost/low-value items. These are the items we tend to think of as quick wins; he points out that they are the death march, and that they merely distract from delivering value. (4) Bottom-right items are normally thrown out and not a problem.

He argues that we need to remove the x axis, cost. Prioritisation should only be about business value, lined up in order. In fact, he points out that when you take away the cost, his group opts for high-value propositions. This mixture of short- and long-term value propositions is a better product mix. But if you leave in the cost, people do tend to want to include the death-march items.

Therefore, do value-based analysis. Rather than cost-benefit, do benefit-benefit analysis. This approach hides costs, or in fact never does the estimations in the first place, because having an estimation is a negative business value proposition and actually has a positive cost. It is a negative business value proposition because it creates a selection bias in which people select non-business-value options. It has a positive cost because someone has to make up the estimations. And there is the further cost that the made-up part is just that: made up. Better to work with a queue of real wait times over the duration of the project.

He continues on to talk about estimations as wait time before starting the next MMF.
