Archive

Posts Tagged ‘DDD’

Repository Pattern and SharePoint to facade PropertyBag

May 23rd, 2011

Introduction

Microsoft Patterns and Practices recommends facading SharePoint with the repository pattern. If you are an object-oriented programmer, that request is straightforward. If you're not, then it isn't. There are few examples of this practice, and most code samples in SharePoint work directly with the API, so SharePoint is scattered throughout the entire code base. If you haven't read much about this, there is a good section in Freeman and Pryce (Growing Object-Oriented Software, Guided by Tests) about this approach – they relate it back to Cockburn's ports and adapters and Evans' anti-corruption layer. I personally think about it as an anti-corruption layer.

In this post, I will give two examples of how we avoid SharePoint having too much reach into the codebase when using Properties. If we were not to use this solution, the codebase would be very EASY. Whenever we wanted a value we would use this code snippet: SPFarm.Local.Properties[key].ToString() (with some Security.RunWithElevatedPrivileges). Using this approach, at best we are likely to see the key as a global constant in some register of keys.
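
For illustration, that EASY inline access looks something like this (a sketch; SPSecurity.RunWithElevatedPrivileges is the standard SharePoint elevation wrapper):

string siteUrl = null;
SPSecurity.RunWithElevatedPrivileges(() =>
{
    siteUrl = SPFarm.Local.Properties["SiteUrl"].ToString();
});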

This type of code does not fit the Freeman and Pryce mantra: prefer to write maintainable code over code that is easy to write. Maintainable code has separation of concerns, abstractions and encapsulation – this is also testable code. So in this example, what you'll see is a lot more code, but what you'll also hopefully appreciate is that we are teasing out domain concepts where SharePoint is only the technical implementation.

So, the quick problem domain. We have two simple concepts: a site location and an environment. We have decided that our solution requires both of these pieces of information to be stored in SharePoint. In this case, we have further decided (rightly or wrongly – possibly wrongly) to let a little bit of SharePoint leak in: both the site location and the environment are really property bag values – we make this decision because the current developers think it is easier in the long run. So, we decided against the EASY option.

Easy Option

Create a register:

public class EnvironmentKeys {
  public const string SiteLocationKey = "SiteUrl";
  public const string EnvironmentKey = "Environment";
}

Access it anytime, either to get:

  var siteUrl = SPFarm.Local.Properties[SiteLocationKey].ToString();

Or update:

  SPFarm.Local.Properties[SiteLocationKey] = "http://newlocation/";
  SPFarm.Local.Update();  // don't worry about privileges as yet

Maintainable option

We are going to create two domain concepts: SiteLocation and Environment, both of which are a PropertyBagItem, fronted by a PropertyBagItemRepository that allows us to Find or Save. Note: we've decided to be a little technology bound because we are using the notion of a property bag when we could just front each domain concept with its own repository. We can always refactor later – the other agenda here is giving SharePoint devs exposure to writing code using generics.
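
To sketch where we are heading, the repository's surface looks like this (hypothetical interface – the concrete class below doesn't bother with one):

public interface IPropertyBagItemRepository
{
  T Find<T>() where T : PropertyBagItem, new();
  T Save<T>(T property) where T : PropertyBagItem;
}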

Here are our domain concepts.

Let’s start with our property bag item contract:

public abstract class PropertyBagItem
{
  public abstract string Key { get; }
  public abstract string Value { get; set; }
}

It has two obvious parts: key and value. Most important here is that we don’t orphan the key from the domain concept. This allows us to avoid the problem of a global register of keys.

And let’s have a new SiteLocation class.

public class SiteLocation : PropertyBagItem
{
  public override string Key { get { return "SiteLocationKey"; } }
  public override string Value { get; set; }

  public SiteLocation() { }  // new() constraint on Find<T> requires a parameterless constructor
  public SiteLocation(string url) { Value = url; }
}

Now, let's write a test for finding and saving a SiteLocation. This is a pretty ugly test because it requires one to be set up. Let's live with it for this sample.

[TestFixture]
public class PropertyBagItemRepositoryTest
{
  private PropertyBagItemRepository _repos;

  [SetUp]
  public void Setup()
  {
    _repos = new PropertyBagItemRepository();
    _repos.Save(new SiteLocation("http://mysites-test/"));
  }

  [Test]
  public void CanFind()
  {
    Assert.That(_repos.Find<SiteLocation>().Value, Is.EqualTo("http://mysites-test/"));
  }

} 

Now, we’ll look at a possible implementation:

public class PropertyBagItemRepository
{
  private readonly Logger _logger = Logger.Get();

  public T Find<T>() where T : PropertyBagItem, new()
  {
    var property = new T();
    _logger.TraceToDeveloper("PropertyBagItemRepository: Finding key: {0}", property.Key);
    return Security.RunWithElevatedPrivileges(() =>
        {
          if (SPFarm.Local.Properties.ContainsKey(property.Key))
          {
            property.Value = SPFarm.Local.Properties[property.Key].ToString();
            _logger.TraceToDeveloper("PropertyBagItemRepository: Found key with property {0}", property.Value);
          }
          else
          {
            _logger.TraceToDeveloper("PropertyBagItemRepository: Unable to find key: {0}", property.Key);
          }
          return property;
        });
  }
}

That should work. We could then add more tests and an implementation for Save, which might look like the following – I prefer chaining, so I return T:

public T Save<T>(T property) where T : PropertyBagItem
{
  _logger.TraceToDeveloper("PropertyBagItemRepository: Save key: {0}", property.Key);
  Security.RunWithElevatedPrivileges(() =>
  {
    SPFarm.Local.Properties[property.Key] = property.Value;
    SPFarm.Local.Update();
  });
  return property;
}
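
A test for Save might then look like this (a sketch against the fixture above; the URL is illustrative only):

[Test]
public void CanSave()
{
  _repos.Save(new SiteLocation("http://mysites-live/"));

  Assert.That(_repos.Find<SiteLocation>().Value, Is.EqualTo("http://mysites-live/"));
}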

Finally, let's look at our next domain concept, the environment. In this case, we want to enumerate all environments. So, we'll write our integration test (yes, we should have a unit test for this domain concept first):

[Test]
public void CanFindEnvironment()
{
  Assert.That(new PropertyBagItemRepository().Find<Environment>().Code, Is.EqualTo(Environment.EnvironmentCode.DEVINT));
}

And now we can see that the implementation is a little more complex than SiteLocation, but we can encapsulate the details well enough – actually, there is some dodgy code, but the point is to illustrate that we keep environment logic, parsing and checking together:

public class Environment : PropertyBagItem
{
  public enum EnvironmentCode { PROD, PREPROD, DEVINT, TEST, DEV }

  public override string Key { get { return "EnvironmentKey"; } }

  public EnvironmentCode Code { get; private set; }

  public override string Value
  {
    get { return Enum.GetName(typeof(EnvironmentCode), Code); }
    set { Code = Parse(value); }
  }

  public Environment(EnvironmentCode code)
  {
    Code = code;
  }

  public Environment(string code)
  {
    Code = Parse(code);
  }

  public Environment() : this(EnvironmentCode.DEV) // new() constraint on Find<T> requires a parameterless constructor
  {
  }

  public static EnvironmentCode Parse(string property)
  {
    try
    {
      return (EnvironmentCode)Enum.Parse(typeof(EnvironmentCode), property, true);
    }
    catch (Exception)
    {
      return EnvironmentCode.DEV;
    }
  }
}
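
Client code now reads as domain code rather than SharePoint plumbing; a quick sketch:

var repository = new PropertyBagItemRepository();
repository.Save(new Environment(Environment.EnvironmentCode.TEST));
var code = repository.Find<Environment>().Code;  // EnvironmentCode.TEST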

It wasn't that much work really, was it?


Test Strategy in SharePoint: Part 4 – Event Receiver as layered Feature

December 12th, 2010

In Test Strategy in SharePoint: Part 3 – Event Receiver as procedural, untestable feature we came up with a design that we believe to be nicer – the code was going to be layered and more testable. This entry looks at the tests and the code that make that design come alive.

Here is the starting design – which may get tweaked a little as we go. In practice, we started with this design in mind (ie use attributes to declaratively decide what to provision) but refactored the original code test-first, trying to work out what was unit vs integration testable and what code was in the domain, infrastructure and ui layers (as based on good layering to aid testability). We won't take you through that process, but it only took a few hours to knock out the first attempt.

Here we'll take you through the creation of one type of page, the NewPage. We have shown the other attributes to make the point that this design exists because we are going to require many types of pages, and we are hoping that the benefit of an attribute and its declarative style will pay off against its cost. We are looking for accessibility and maintainability as we bring on new developers – or come back to it ourselves in a couple of weeks!

using System.Runtime.InteropServices;
using YourCompany.SharePoint.Domain.Model.Provisioning;
using YourCompany.SharePoint.Infrastructure;
using YourCompany.SharePoint.Infrastructure.Configuration;
using Microsoft.Office.Server.UserProfiles;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Publishing;

namespace YourCompany.SharePoint.MySites.Features.WebFeatureProvisioner
{
    [Guid("cb9c03cd-6349-4a1c-8872-1b5032932a04")]
    public class SiteFeatureEventReceiver : SPFeatureReceiver
    {
        [NewPage("Home.aspx", "Home Page", "HomePage.aspx")]
        [ActivateFeature(PackageCfg.PublishingWebFeature)]
        [RemovePage("Pages/default.aspx")]
        [MasterPage("CustomV4.master", MasterPage.MasterPageType.User)]
        [MasterPage("CustomMySite.master", MasterPage.MasterPageType.Host)]
        public override void FeatureActivated(SPFeatureReceiverProperties properties)
        {
            PersonalSiteProvisioner
              .Create(properties.Feature.Parent as SPWeb)
              .Process();
        }
    }
} 

Overview of strategy

I want to drive as much testing as possible back into unit tests; the rest can go to integration testing. However, there is another issue here. As a team member, I actually want a record of the operational part of the tests, because this is about installation/deployment and happens at the stage of provisioning/activation. So we'll also need to write a BDD-style acceptance test to tease out the feature activation process. Thus:

  • the acceptance test will have the ability to actually activate a feature
  • the unit tests should specify the specifics of this activation – this is the mechanism we will use to drive out our code abstractions
  • the integration tests will be any tests proving specific API calls

System Acceptance test

Before we try to drive out a design, let's understand what needs to be done. Writing this acceptance test requires a good knowledge of SharePoint provisioning – so these are technically-focussed acceptance tests rather than business ones.

We will write a system test in Acceptance\Provisioning\ (we will implement this in StoryQ later on):

Story is Solution Deployment

In order to create new pages for user
As a user
I want a 'MySites' available

With scenario have a new feature
  Given I have a new wsp package mysites.wsp
  When site is deployed
    And I am on site http://mysites/personal
  Then Publishing Site Feature is site activated

Design

We now start to drive out some code concepts from tests. I think that our TODO list is something like this:

TODO

  • have an attribute
  • be able to read the attributes from a class
  • return a publisher for a page rather than a page
  • create a service for processing a publisher
  • actually publish pages in the context of SharePoint (integration)
  • check that it all works in SharePoint when deployed (system)
  • add new types of publishers into the factory class

Unit test: have an attribute

Because this is the first test, I include the namespace to show that we are driving out domain code from within the unit test project. This code creates an attribute that represents a new page. It is straightforward code that lets us simply declare the names of the pages we want on the new page.

using System.Linq;
using NUnit.Framework;

namespace Test.Unit.Provisioning
{
    [TestFixture]
    public class SiteProvisioningAttributesTests
    {
        [Test]
        public void ProvisioningHasHomePage()
        {
            Assert.IsTrue(typeof(TestClass).GetMethod("OneNewPage").GetCustomAttributes(typeof(NewPageAttribute), false).Count() == 1);
        }

        [Test]
        public void CanReturnPageValues()
        {
            var page = ((IProvisioningAttribute)typeof(TestClass).GetMethod("OneNewPage").GetCustomAttributes(typeof(NewPageAttribute), false).First()).Page;
            Assert.AreEqual("Home.aspx", page.Name);
        }

        public class TestClass
        {
            [NewPage("Home.aspx", "Home Page", "HomePage.aspx")]
            public void OneNewPage()
            {

            }
        }
    }
} 

Now the skeleton code for the attribute – it will need to do provisioning later on, but let's leave that for now. At this stage, we are just going to make the attribute able to return the value object of a page:

using System;
using YourCompany.SharePoint.Domain.Model;
using YourCompany.SharePoint.Domain.Model.Provisioning;
using YourCompany.SharePoint.Domain.Services.Provisioning;

namespace YourCompany.SharePoint.Infrastructure.Provisioning
{
    [Serializable, AttributeUsage(AttributeTargets.Method, Inherited = false, AllowMultiple = true)]
    public class NewPageAttribute : Attribute, IProvisioningAttribute
    {
        public Page Page { get; private set; }
        public NewPageAttribute(string name, string title, string pageLayout)
        {
            Page = new Page(name, title, pageLayout);
        }
    }
} 
public struct Page
{
    public string Name { get; private set; }
    public string Title { get; private set; }
    public string PageLayout { get; private set; }
    
    public Page(string name, string title, string pageLayout)
        : this()
    {
        Name = name;
        Title = title;
        PageLayout = pageLayout;
    }
}
public interface IProvisioningAttribute
{
    Page Page { get; }
}

Unit test: be able to read the attributes from a class

TODO

  • have an attribute
  • be able to read the attributes from a class
  • return a publisher for a page rather than a page
  • create a service for processing a publisher
  • actually publish pages in the context of SharePoint (integration)
  • check that it all works in SharePoint when deployed (system)
  • add new types of publishers into the factory class

Now that we have an attribute that can return the values for a page, we need to create a parser across the class that can return the attributes.


[TestFixture]
public class SiteProvisioningAttributesTests
{
    [Test]
    public void NewProvisioningCountIsOneForNewPageMethod()
    {
        var pages = AttributeParser.Parse<NewPageAttribute>(typeof(TestClass), "OneNewPage");
        Assert.IsTrue(pages.Count() == 1);
    }

    [Test]
    public void NewProvisioningCountIsTwoForNewPageMethod()
    {
        var pages = AttributeParser.Parse<NewPageAttribute>(typeof(TestClass), "TwoNewPages");
        Assert.IsTrue(pages.Count() == 2);
    }

    public class TestClass
    {
        [NewPage("Home.aspx", "Home Page", "HomePage.aspx")]
        public void OneNewPage()
        {
        }

        [NewPage("Home.aspx", "Home Page", "HomePage.aspx")]
        [NewPage("Home2.aspx", "Home2 Page", "HomePage2.aspx")]
        public void TwoNewPages()
        {
        }
    }
}

With code something like this, we get all the new pages:

public class AttributeParser
{
    public static IEnumerable<Page> Parse<T>(Type type, string method)
        where T : IProvisioningAttribute
    {
        return type.GetMethod(method).GetCustomAttributes(typeof(T), false)
            .Select(attribute => ((T)attribute).Page);
    }
}

Now that we return a page by iterating over the method, I can see that we don't want a page per se but rather a specific type of publisher.

Unit tests: return a publisher for a page rather than a page

TODO

  • have an attribute
  • be able to read the attributes from a class
  • return a publisher for a page rather than a page
  • create a service for processing a publisher
  • actually publish pages in the context of SharePoint (integration)
  • check that it all works in SharePoint when deployed (system)
  • add new types of publishers into the factory class

Returning a publisher is going to be a refactor, as we are adding a new concept through the attribute. Let's rewrite the test so that our parser instead returns an explicit IPagePublisher rather than an implicit Page.

[TestFixture]
public class SiteProvisioningAttributesTests
{
    [Test]
    public void NewProvisioningCountIsOneForNewPageMethod()
    {
        var pages = AttributeParser.Parse<IPagePublisher, NewPageAttribute>(typeof(TestClass), "OneNewPage");
        Assert.IsTrue(pages.Count() == 1);
    }

    public class TestClass
    {
        [NewPage("Home.aspx", "Home Page", "HomePage.aspx")]
        public void OneNewPage()
        {
        }
    }
}

As we unpick this small change, we get the introduction of a new interface, IPagePublisher. In practice, I add an empty interface, but here I want to show that we introduce the concept of publishing without actual implementations – this is the benefit that we are looking for. In summary, I give the activator a "new page" with details and it provides back to me a publisher that knows how to deal with this information. That sounds fair to me.

public interface IPagePublisher
{
    void Add();
    void Delete();
    void CheckIn();
    void CheckOut();
    void Publish();
    bool IsPublished();
    bool IsProvisioned();
} 

So now our AttributeParser becomes:

public class AttributeParser
{
    public static IEnumerable<T> Parse<T, T1>(Type type, string method)
        where T : IPagePublisher
        where T1 : IProvisioningAttribute
    {
        return type.GetMethod(method).GetCustomAttributes(typeof(T1), false)
            .Select(attribute => ((T1)attribute).Publisher(((T1)attribute).Page))
            .Cast<T>();
    }
} 

This requires a change to our interface, which may need a little bit of explanation for some. Why the Func? Basically, it gives us a property that accepts parameters, and we can swap out its implementation as needed (for testing).

public interface IProvisioningAttribute
{
    Func<Page, IPagePublisher> Publisher { get; }
    Page Page { get; }
}

So the concrete implementation now becomes:

[Serializable, AttributeUsage(AttributeTargets.Method, Inherited = false, AllowMultiple = true)]
public class NewPageAttribute : Attribute, IProvisioningAttribute
{
    public Func<Page, IPagePublisher> Publisher
    {
        get { return page => new PagePublisher(page); }
    }

    public Page Page { get; private set; }

    public NewPageAttribute(string name, string title, string pageLayout)
    {
        Page = new Page(name, title, pageLayout);
    }
}

Wow, all we did was add IPagePublisher to AttributeParser.Parse<IPagePublisher, NewPageAttribute>(typeof(TestClass), "OneNewPage"); and we got all that code!
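
That swap-out is the point of the Func: in a unit test we can stub the publisher factory without going anywhere near SharePoint. A hypothetical sketch (using Moq, which the tests below also use):

// stub the factory so parsing can be exercised without a real PagePublisher
Func<Page, IPagePublisher> stubFactory = page => new Mock<IPagePublisher>().Object;

var attribute = new Mock<IProvisioningAttribute>();
attribute.Setup(a => a.Publisher).Returns(stubFactory);
attribute.Setup(a => a.Page).Returns(new Page("Home.aspx", "Home Page", "HomePage.aspx"));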

Unit tests: create a service for processing a publisher

TODO

  • have an attribute
  • be able to read the attributes from a class
  • return a publisher for a page rather than a page
  • create a service for processing a publisher
  • actually publish pages in the context of SharePoint (integration)
  • check that it all works in SharePoint when deployed (system)
  • add new types of publishers into the factory class

So far, we are able to decorate a method with attributes for each page. When we parse these attributes, we get back the publishers with the page. Now we need to be able to process a publisher. We are going to move into mocking out a publisher to check that the publisher is called in the service. This service is a provisioner, and it will be provisioning the personal site, so hopefully calling it the PersonalSiteProvisioner makes sense.

Let's look at the code below. We create a new personal site provisioner, and inside this we want to ensure that the page publisher actually adds a page. Note that we are deferring how the page is actually added – we just want to know that we are calling Add. (We are using Moq as the isolation framework.)

using Moq;

[TestFixture]
public class SiteProvisioningTest
{
    [Test]
    public void CanAddPage()
    {
        var page = new Mock<IPagePublisher>();
        var provisioner = new PersonalSiteProvisioner(page.Object);

        provisioner.Process();

        page.Verify(x => x.Add());
    }
 }

So here’s the code to satisfy the test:

public interface IProvisioner
{
    void Process();
}

With the implementation:

namespace YourCompany.SharePoint.Domain.Services.Provisioning
{
    public class PersonalSiteProvisioner : IProvisioner
    {
        public List<IPagePublisher> PagePublishers { get; private set; }

        public PersonalSiteProvisioner(List<IPagePublisher> publishers)
        {
            PagePublishers = publishers;
        }

        public PersonalSiteProvisioner(IPagePublisher publisher)
            : this(new List<IPagePublisher> { publisher })
        {
        }

        public void Process()
        {
            PagePublishers.TryForEach(x => x.Add());
        }
    }
}

Right, that all looks good. We can Process some publishers in our provisioner. Here's some of the beauty (IMHO) that we can add to these tests without going near an integration point. Let's add a few more tests: adding multiple pages, adding and deleting, ensuring that there is error handling, trying different combinations of adding and deleting, and finally that if one add errors we can still process the others in the list. Take a look at the tests (P.S. the implementation in the end was easy once we had tests, but it took an hour or so).

Unit tests: really writing the provisioner across most cases

TODO

  • have an attribute
  • be able to read the attributes from a class
  • return a publisher for a page rather than a page
  • create a service for processing a publisher
  • actually publish pages in the context of SharePoint (integration)
  • check that it all works in SharePoint when deployed (system)
  • add new types of publishers into the factory class

[TestFixture]
public class SiteProvisioningTest
{

    [Test]
    public void CanAddMultiplePages()
    {
        var page = new Mock<IPagePublisher>();
        var provisioner = new PersonalSiteProvisioner(new List<IPagePublisher> { page.Object, page.Object });

        provisioner.Process();

        page.Verify(x => x.Add(), Times.Exactly(2));
    }

    [Test]
    public void CanRemovePage()
    {
        var page = new Mock<IPagePublisher>();
        page.Setup(x => x.IsPublished()).Returns(true);

        var provisioner = new PersonalSiteProvisioner(page.Object);

        provisioner.Process();

        page.Verify(x => x.Delete());
    }

    [Test]
    public void RemovingPageThatDoesntExistDegradesNicely()
    {
        var page = new Mock<IPagePublisher>();
        page.Setup(x => x.IsPublished()).Returns(true);
        page.Setup(x => x.Delete()).Throws(new Exception());

        var provisioner = new PersonalSiteProvisioner(page.Object);

        provisioner.Process();

        page.Verify(x => x.Delete());
    }

    [Test]
    [Sequential]
    public void SecondItemIsProcessedWhenFirstItemThrowsException(
        [Values(false, true)] bool delete,
        [Values(true, false)] bool add)
    {
        var page = new Mock<IPagePublisher>();
        page.Setup(x => x.IsPublished()).ReturnsInOrder(
            () => delete,
            () => add);
        page.Setup(x => x.Delete()).Callback(() => { throw new Exception(); });

        var provisioner = new PersonalSiteProvisioner(new List<IPagePublisher> { page.Object, page.Object, page.Object });
        provisioner.Process();

        page.Verify(x => x.Add(), Times.Once());
        page.Verify(x => x.Delete(), Times.Once());
    }

    [Test]
    public void AlreadyInstalledPagesWontReinstallCatchesException()
    {
        var page = new Mock<IPagePublisher>();
        page.Setup(x => x.Add()).Throws(new Exception());

        var provisioner = new PersonalSiteProvisioner(page.Object);
        provisioner.Process();

        page.Verify(x => x.Add(), Times.Once());
    }
}

and the implementation is below. Just a couple of notes: this class should have logging in it and should also be interaction tested (this is the key point in the system that we need in the logs), and there is a helper wrapper, TryForEach, which is a simple ForEach wrapper that null checks and catches per-item exceptions (see the sketches after the implementation). Believe it or not, the tests above drove out a lot of errors even in this small piece of code because it had to deal with list processing. We now don't have to deal with these issues at integration time (and particularly in production).

namespace YourCompany.SharePoint.Domain.Services.Provisioning
{
    public class PersonalSiteProvisioner : IProvisioner
    {
        public List<IPagePublisher> PagePublishers { get; private set; }

        public PersonalSiteProvisioner(List<IPagePublisher> publishers)
        {
            PagePublishers = publishers;
        }
        public PersonalSiteProvisioner(IPagePublisher publisher) 
          : this(new List<IPagePublisher>{publisher})
        {
        }

        public void Process()
        {
            PagePublishers.TryForEach(x =>
            {
                if (x.IsPublished()) {
                    x.Delete();
                } else {
                    x.Add();
                }
            });
        }
    }
} 
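
Neither TryForEach nor ReturnsInOrder is shown in the post. TryForEach is our own extension method; a sketch consistent with the tests above (one failing publisher must not stop the rest) might be:

using System;
using System.Collections.Generic;

public static class EnumerableExtensions
{
    // null-checking ForEach that catches per-item exceptions so one
    // failing publisher doesn't stop the rest being processed
    public static void TryForEach<T>(this IEnumerable<T> items, Action<T> action)
    {
        if (items == null || action == null) return;
        foreach (var item in items)
        {
            try
            {
                action(item);
            }
            catch (Exception)
            {
                // swallow (and ideally log) the failure, then continue
            }
        }
    }
}

ReturnsInOrder is not part of Moq itself either; it is the well-known community extension, roughly:

using System;
using System.Collections.Generic;
using Moq.Language.Flow;

public static class MoqExtensions
{
    // return a different value on each successive call to the mocked member
    public static void ReturnsInOrder<TMock, TResult>(this ISetup<TMock, TResult> setup,
        params Func<TResult>[] valueFunctions) where TMock : class
    {
        var queue = new Queue<Func<TResult>>(valueFunctions);
        setup.Returns(() => queue.Dequeue()());
    }
}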

Now we are ready to do something with SharePoint.

Integration tests: publish pages in the context of SharePoint

TODO

  • have an attribute
  • be able to read the attributes from a class
  • return a publisher for a page rather than a page
  • create a service for processing a publisher
  • actually publish pages in the context of SharePoint (integration)
  • check that it all works in SharePoint when deployed (system)
  • add new types of publishers into the factory class

Now we have to deal with SharePoint. This means that we are going to write an integration test. In terms of the classes, we are now ready to implement the PagePublisher. Let's write a test. The basic design is that we hand in the publishing web context (and page) and then add it. (Note: we could have handed in only the web context in the constructor and then the page dependency in the method – this looks better in hindsight.) The code then asserts that the item exists. Now, those of you familiar with the SharePoint API know that neither the Site.OpenPublishingWeb nor the SiteExpectations.ExpectListItem calls exist. These are wrappers to make the code more readable. We'll include those below.

What you need to see is that there are no references to SharePoint in the Domain code – the flip side being that the Infrastructure code is the place where these live.

using YourCompany.SharePoint.Domain.Model.Provisioning;
using YourCompany.SharePoint.Infrastructure;
using Microsoft.SharePoint.Publishing;
using NUnit.Framework;
using Test.Unit;

namespace Test.Integration.Provisioning
{
    [TestFixture]
    public class AddPagePublisherTest
    {
        [Test]
        public void CanAddPage()
        {
            var page = new Page("Home.aspx", "Home Page", "HomePage.aspx");   
            Site.OpenWeb("http://mysites/", x => new PagePublisher(x.Web, page).Add());
            SiteExpectations.ExpectListItem("http://mysites/", x => x.Name == "Home.aspx");
        }
    }
}

Additional wrapper code – one in Infrastructure and the other in the Test.Integration project – these both hide the repetitive complexity of SharePoint.

namespace YourCompany.SharePoint.Infrastructure
{
    public static class Site
    {
        public static void OpenWeb(string url, Action<SPWeb> action)
        {
            using (var site = new SPSite(url))
            using (var web = site.OpenWeb())
            {
                action(web);
            }
        }
        public static void OpenPublishingWeb(string url, Action<PublishingWeb> action)
        {
            using (var site = new SPSite(url))
            using (var web = site.OpenWeb())
            {
                action(PublishingWeb.GetPublishingWeb(web));
            }
        }
    }
}

And the test assertion:

namespace Test.Integration
{
   public class SiteExpectations
   {
       public static void ExpectListItem(string url, Func<SPListItem, bool> action)
       {
           Site.OpenPublishingWeb(url, x => Assert.IsTrue(x.PagesList.Items.Cast<SPListItem>().Where(action).Any()));
       }
   }
}

So now let's look at (some of) the code to implement Add. Sorry, it's a bit long but should be easy to read. Our page publisher code doesn't look a lot different from the original code we found in the blog Customising SharePoint 2010 MySites with Publishing Sites in the SetupMySite code. It is perhaps a little clearer because there are a few extra private methods that help describe the process that SharePoint requires when creating a new page. But the other code would have got there eventually.

The key difference is how we get there. The correctness of this code was confirmed by the integration test. In fact, we had to run this code tens (or perhaps a hundred) times to iron out the idiosyncrasies of the SharePoint API – particularly the case around DoUnsafeUpdates. Yet we didn't have to deploy the entire solution each time, and at no point did we have to debug by attaching to a process. There were some times we were at a loss and did resort to the debugger, but we were able to get context in Debug in the test runner. All of this has led to increases in speed and stability.

There’s a final win in design. This code has one responsibility: to add a page. No more, no less. When we come to add multiple pages we don’t change this code. If we come to add another type of page – perhaps we don’t change this code either but create another type of publisher. Later on we can work out whether these publishers have shared, base code. A decision we can defer until refactoring.

using System;
using System.Linq;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Publishing;

namespace YourCompany.SharePoint.Infrastructure.Provisioning
{
    public class PagePublisher : IPagePublisher
    {
        public const string SiteCreated = "MySiteCreated";

        private readonly Page _page;
        private readonly SPWeb _context;
        private PublishingWeb _pubWeb;

        public PagePublisher(SPWeb web, Page publishedPage)
        {
            _page = publishedPage;
            _context = web;
        }

        public void Add()
        {
            _pubWeb = PublishingWeb.GetPublishingWeb(_context);
            if (IsProvisioned()) return;  // run only once per site

            DisableVersioning();

            if (!HomePageExists)
                CreatePage();

            AddAsDefault();
            Commit();

            DoUnsafeUpdates(x =>
            {
                x.Properties.Add(SiteCreated, "true");
                x.Properties.Update();
            });
        }

        public bool IsProvisioned()
        {
            return _context.Properties.ContainsKey(SiteCreated);
        }

        private void AddAsDefault()
        {
            _pubWeb.SetDefaultPage(HomePage.ListItem.File);
        }

        private void Commit()
        {
            _pubWeb.Update();
        }

        private bool HomePageExists
        {
            get { return _pubWeb.HasPublishingPage(_page.Name); }
        }

        private PublishingPage HomePage
        {
            // the published page we just created (used by AddAsDefault)
            get
            {
                return _pubWeb.GetPublishingPages()
                    .Cast<PublishingPage>()
                    .Single(p => p.Name == _page.Name);
            }
        }

        private void DoUnsafeUpdates(Action<SPWeb> action)
        {
            var currentState = _context.AllowUnsafeUpdates;
            _context.AllowUnsafeUpdates = true;
            action(_context);
            _context.AllowUnsafeUpdates = currentState;
        }

        private void DisableVersioning()
        {
            var pages = _pubWeb.PagesList;
            pages.EnableVersioning = false;
            pages.ForceCheckout = false;
            pages.Update();
        }

        private void CreatePage()
        {
            var layout = _pubWeb.GetAvailablePageLayouts()
                .Where(p => p.Name == _page.PageLayout)
                .SingleOrDefault();

            var page = _pubWeb.GetPublishingPages().Add(_page.Name, layout);
            page.Title = _page.Title;
            page.Update();
        }
    }
}

You should note that our tests against SharePoint are actually small and neat. In this case, the test has one dependency (SharePoint) and one interaction (Add). The tests should be that simple. Setup and teardown are where it gets a little harder. Below requires a SetUp which swaps out the original page so that the new one can be added and we know it is different. TearDown cleans up the temp publishing page. Note: at this stage the code in the SetUp has not been abstracted into its own helper – we could do this later.

namespace Test.Integration.Provisioning
{
    [TestFixture]
    public class AddPagePublisherTest
    {
        private Page _publishedPage;
        private const string HomeAspx = "Home.aspx";
        private readonly string _homePage = a.TestUser.HomePage;
        private PublishingPage _tempPublishingPage;

        [SetUp]
        public void SetUp()
        {
            _publishedPage = new Page("Home.aspx", "Home Page", "HomePage.aspx");

            Site.OpenPublishingWeb(_homePage, x =>
            {
                var homePageLayout = x.GetPublishingPageLayout(_publishedPage.PageLayout);
                _tempPublishingPage = x.AddPublishingPage("TempPage.aspx", homePageLayout);
                x.SetDefaultPage(_tempPublishingPage.ListItem.File);
                x.Update();
                x.DeletePublishingPage(_publishedPage.Name);
            });
        }

        [Test]
        public void CanAddPage()
        {
            Site.OpenWeb(_homePage, x => new PagePublisher(x, _publishedPage).Add());
            SiteExpectations.ExpectListItem(_homePage, x => x.Name == HomeAspx);
        }

        [TearDown]
        public void Teardown()
        {
            Site.OpenWeb(_homePage, x => x.DeletePublishingPage(_tempPublishingPage.Name));
        }
    }
} 

System: check that it all works in SharePoint when deployed

TODO

  • have an attribute
  • be able to read the attributes from a class
  • return a publisher for a page rather than a page
  • create a service for processing a publisher
  • actually publish pages in the context of SharePoint (integration)
  • check that it all works in SharePoint when deployed (system)
  • add new types of publishers into the factory class

The final step in this cadence is to create system tests by completing the acceptance tests. Theoretically, this step should not be needed because the code already talks to SharePoint. In practice, this step finds problems and cleans up code at the same time. Let's return to the test that we originally wrote and have been delivering to – but not coding against. We are now going to implement the system test using test-last development. Here is the story:

Story is Solution Deployment

In order to create new pages for user
As a user
I want a MySites available

With scenario have a new feature
  Given I have a new wsp package mysites.wsp
  When site is deployed
    And I am on site http://mysites/personal/45678
  Then Publishing Site Feature F6924D36-2FA8-4f0b-B16D-06B7250180FA is site activated

You may notice in the code above that we have actually added some more information to the original story. We now know that the personal site has an Id in it (http://mysites/personal/45678) and we also know the GUID of the feature (F6924D36-2FA8-4f0b-B16D-06B7250180FA). Adding, changing and morphing system tests often happens, particularly as we learn more about our implementation – in our case, we have to run the analysts/product owner through these changes!

Now we need to find a library to provide an implementation. There are a number of libraries: Cucumber, NBehave or StoryQ. We have chosen StoryQ. StoryQ has a GUI converter that can take the text input above and turn it into C# skeleton code as per below, and then we fill out the code (Cucumber keeps the story separate from the implementation). StoryQ will then output the results of the tests in the test runner.

  using System;
  using StoryQ;
  using NUnit.Framework;

  namespace Test.System.Acceptance.Provisioning
  {
      [TestFixture]
      public class SolutionDeploymentTest
      {
          [Test]
          public void SolutionDeployment()
          {
              new Story("Solution Deployment")
                  .InOrderTo("create new pages for users")
                  .AsA("user")
                  .IWant("a MySites available")

                  .WithScenario("have a new feature")
                    .Given(IHaveANewWspPackage_, "mysites.wsp")
                    .When(SiteIsDeployed)
                      .And(IAmOnSite_, "http://mysites/personal/684945")
                    .Then(__IsSiteActivated, "Publishing Site Feature", "F6924D36-2FA8-4f0b-B16D-06B7250180FA")
                 
                  .Execute();
          }

          private void IHaveANewWspPackage_(string wsp)
          {
              throw new NotImplementedException();
          }

          private void SiteIsDeployed()
          {
              throw new NotImplementedException();
          }

          private void IAmOnSite_(string site)
          {
              throw new NotImplementedException();
          }    

          private void __IsSiteActivated(string title, string guid)
          {
              throw new NotImplementedException();
          }
      }
  }

Our implementation of the system test is surprisingly simple. Because we are provisioning new features, the main test is that the deployment sequence has worked. So we are going to make two checks (once the code is deployed): is the solution deployed? is the feature activated? To do this we are going to need to write a couple of abstractions around the SharePoint API: Solution.IsDeployed & Feature.IsSiteActivated. We do this because we want the system tests to remain extremely clean.

  using StoryQ;
  using NUnit.Framework;

  namespace Test.System.Acceptance.Provisioning
  {
      [TestFixture]
      public class SolutionDeploymentTest
      {
          [Test]
          public void SolutionDeployment()
          {
              new Story("Solution Deployment")
                  .InOrderTo("create new pages for users")
                  .AsA("user")
                  .IWant("a MySites available")

                  .WithScenario("have a new feature")
                    .Given(IHaveANewWspPackage_, "mysites.wsp")
                    .When(SiteIsDeployed)
                      .And(IAmOnSite_, "http://mysites/personal/684945")
                    .Then(__IsSiteActivated, "Publishing Site Feature", "F6924D36-2FA8-4f0b-B16D-06B7250180FA")
                 
                  .Execute();
          }

          private string _site;
          private string _wsp;

          private void IHaveANewWspPackage_(string wsp)
          {
              _wsp = wsp;
          }

          private void SiteIsDeployed()
          {
              Assert.IsTrue(Solution.IsDeployed(_wsp));
          }

          private void IAmOnSite_(string site)
          {
              _site = site;
          }    

          private void __IsSiteActivated(string title, string guid)
          {
              Assert.IsTrue(Feature.IsSiteActivated(_site, guid));
          }
      }
  }

Here are the wrappers around the SharePoint API that we are going to reuse. Sometimes we wrap the current API, SPFarm:

public static class Solution
{
    public static bool IsDeployed(string wsp)
    {
        return SPFarm.Local.Solutions
            .Where(x => x.Name == wsp)
            .Any();
    }
} 

Sometimes we are wrapping our own wrappers, such as Site.Open:

public static class Feature
{
    public static bool IsSiteActivated(string webUrl, string feature)
    {
        return Site.Open(webUrl, site => 
             site.Features
                .Where(x =>
                    x.Definition.Id == new Guid(feature) &&
                    x.Definition.Status == SPObjectStatus.Online).Any()
             );
    }
} 
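
Site.Open with a return value isn't shown above; here is a minimal sketch consistent with its use in Feature.IsSiteActivated (assuming it hands in the SPSite, since site-scoped features live on the site collection):

public static T Open<T>(string url, Func<SPSite, T> query)
{
    using (var site = new SPSite(url))
    {
        return query(site);
    }
}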

Starting to wrap up the cadence for adding a feature

Below is the TODO list that we started with as we layered our code with a test automation pyramid strategy; we are now up to adding new publishers.

  • have an attribute
  • be able to read the attributes from a class
  • return a publisher for a page rather than a page
  • create a service for processing a publisher
  • actually publish pages in the context of SharePoint (integration)
  • check that it all works in SharePoint when deployed (system)
  • add new types of publishers into the factory class

So far we have only implemented NewPage and still have ActivateFeature, RemovePage and MasterPage to go:

public class SiteFeatureEventReceiver : SPFeatureReceiver
{
    [NewPage("Home.aspx", "Home Page", "HomePage.aspx")]
    [ActivateFeature(PackageCfg.PublishingWebFeature)]
    [RemovePage("Pages/default.aspx")]
    [MasterPage("CustomV4.master", MasterPage.MasterPageType.User)]
    [MasterPage("CustomMySite.master", MasterPage.MasterPageType.Host)]
    public override void FeatureActivated(SPFeatureReceiverProperties properties)
    {
        PersonalSiteProvisioner
          .Create(properties.Feature.Parent as SPWeb)
          .Process();
    }
}

To bring in the new pages, it would be a mistake to start by implementing the attributes. Instead, we need to write a system test – which may merely be an extension of the current system test – and then proceed through unit and integration tests. As we add a new page, we are likely to find that we need a new concept: a factory of some sort that will return our publishers for each provisioning attribute.

So let me finish with the new acceptance test for [ActivateFeature(PackageCfg.PublishingWebFeature)] and a TODO list.

Story is Solution Deployment

In order to create new pages for user
As a user
I want a MySites available

With scenario have a new feature
  Given I have a new wsp package mysites.wsp
  When site is deployed
    And I am on site http://mysites/personal/45678
  Then Publishing Site Feature F6924D36-2FA8-4f0b-B16D-06B7250180FA is site activated
    And Publishing Feature is web activated

And the TODO now becomes:

TODO

  • have a new attribute ActivateFeature (unit)
  • be able to read the attribute (unit)
  • return a publisher for a page rather than a page ActivateFeaturePublisher? (unit)
  • extend service for processing a publisher
  • actually activate feature in the context of SharePoint (integration)
  • complete acceptance test (system)

Convinced?

This is a lot of work. Layering tends to do this. But it provides the basis for scaling the code base. Quite quickly we have found that there are patterns for reuse, that we pick up SharePoint API problems in integration tests rather than post-deployment in system tests, that we can move around the code base easily-ish, and that we can refactor because the code is under test. All of this provides us with the possibility of scale benefits in terms of speed and quality. A system that is easy to code tends not to provide these scale benefits.


Test Strategy in SharePoint: Part 3 – Event Receiver as procedural, untestable feature

December 5th, 2010

Here I cover procedural, untestable code, and to do so I will cover three pieces of SharePoint code. The first is sample code that suggests just how easy it is to write SharePoint code. The second is production quality code written in the style of the first, and yet it becomes unmaintainable. The third piece of code is what we refactored the second piece to become, and we think it's maintainable. But it comes at a cost (and benefit) that I explore in "Test Strategy in SharePoint: Part 4 – Event Receiver as layered Feature".

Sample One: sample code is easy code

Here's a current sample that you will find on the web – and that we find is actually the code that makes its way into production rather than remaining sample code – so no offence to the author. Sharemuch writes:

When building publishing site using SharePoint 2010 it’s quite common to have few web parts that will make it to every page (or close to every page) on the site. An example could be a custom secondary navigation which you may choose to make a web part to allow some user configuration. This means you need to provision such web part on each and every page that requires it – right? Well, there is another solution. What you can do is to define your web part in a page layout module just like you would in a page. In MOSS this trick would ensure your web part will make it to every page that inherits your custom layout; not so in SharePoint 2010. One solution to that is to define the web part in the page layout, and programmatically copy web parts from page layout to pages inheriting them. In my case I will demonstrate how to achieve this by a feature receiver inside a feature that will be activate in site template during site creation. This way every time the site is created and pages are provisioned – my feature receiver will copy web parts from page layout to those newly created pages.

	public override void FeatureActivated(SPFeatureReceiverProperties properties)
	{
	    SPWeb web = properties.Feature.Parent as SPWeb;

	    if (null != web)
	    {
	        PublishingWeb pubWeb = PublishingWeb.GetPublishingWeb(web);
	        SPList pages = pubWeb.PagesList;

	        foreach (SPListItem page in pages.Items)
	        {
	            PublishingPage pubPage = PublishingPage.GetPublishingPage(page);
	            pubPage.CheckOut();
	            CopyWebParts(pubPage.Url, web, pubPage.Layout.ServerRelativeUrl, pubPage.Layout.ListItem.Web);
	            pubPage.CheckIn("Webparts copied from page layout");
	        }
	    }
	}

	private void CopyWebParts(string pageUrl, SPWeb pageWeb, string pageLayoutUrl, SPWeb pageLayoutWeb)
	{
	    SPWeb web = null;
	    SPWeb web2 = null;
	    SPLimitedWebPartManager pageWebPartManager = pageWeb.GetLimitedWebPartManager(pageUrl, PersonalizationScope.Shared);
	    SPLimitedWebPartManager pageLayoutWebPartManager = pageLayoutWeb.GetLimitedWebPartManager(pageLayoutUrl, PersonalizationScope.Shared);
	    web2 = pageWebPartManager.Web;
	    web = pageLayoutWebPartManager.Web;
	    SPLimitedWebPartCollection webParts = pageLayoutWebPartManager.WebParts;
	    SPLimitedWebPartCollection parts2 = pageWebPartManager.WebParts;
	    foreach (System.Web.UI.WebControls.WebParts.WebPart part in webParts)
	    {
	        if (!part.IsClosed)
	        {
	            System.Web.UI.WebControls.WebParts.WebPart webPart = parts2[part.ID];
	            if (webPart == null)
	            {
	                string zoneID = pageLayoutWebPartManager.GetZoneID(part);
	                pageWebPartManager.AddWebPart(part, zoneID, part.ZoneIndex);
	            }
	        }
	    }
	}

This sample code sets the tone that this is maintainable code. For example, there is some abstraction, with the CopyWebParts method remaining separate from the activation code. Yet if I put it against the four elements of simple design, the private method maximises clarity but won't get past the first rule: passing tests.

Let's take a look at some production quality code that I have encountered and then refactored to make it maintainable.

Sample Two: easy code goes production

All things dev puts up the sample for Customising SharePoint 2010 MySites with Publishing Sites. Here is the code below that follows the same patterns: clarity is created through class-scope refactoring into private methods. But we still see magic string constants, local error handling, and procedural-style coding against the SharePoint API (in SetUpMySite()). The result is code that is easy to write, easy to deploy, manual to test, and whose reuse is through block-copy inheritance (ie copy and paste).

using System;
using System.Runtime.InteropServices;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Publishing;
using System.Linq;

namespace YourCompany.SharePoint.Sites.MySites
{

   [Guid("cd93e644-553f-4486-91ad-86e428c89723")]
   public class MySitesProvisionerReceiver : SPFeatureReceiver
   {

      private const string MySiteCreated = "MySiteCreated";
      private const string ResourceFile = "MySiteResources";
      private const uint _lang = 1033;

      public override void FeatureActivated(SPFeatureReceiverProperties properties)
      {
         using (SPWeb web = properties.Feature.Parent as SPWeb)
         {
            //run only if MySite hasn't been created yet as feature could be run after provisioning as well
            if (web.Properties.ContainsKey(MySiteCreated))
               return;

            ActivatePublishingFeature(web);
            SetUpMySite(web);
         }
      }

      private void ActivatePublishingFeature(SPWeb web)
      {
         //Activate Publishing Web feature as the stapler seems to not do this consistently
         try
         {
            web.Features.Add(new Guid("94C94CA6-B32F-4da9-A9E3-1F3D343D7ECB"));
         }
         catch (Exception)
         {
            //already activated
         }
      }

      private void SetUpMySite(SPWeb web)
      {
         //turn off versioning, optional but keeps it easier for users as they are the only users of their MySite home page
         var pubWeb = PublishingWeb.GetPublishingWeb(web);
         var pages = pubWeb.PagesList;
         pages.EnableVersioning = false;
         pages.ForceCheckout = false;
         pages.Update();

         //set custom masterpage
         var customMasterPageUrl = string.Format("{0}/_catalogs/masterpage/CustomV4.master", web.ServerRelativeUrl);
         web.CustomMasterUrl = customMasterPageUrl;
         web.MasterUrl = customMasterPageUrl;

         var layout = pubWeb.GetAvailablePageLayouts().Cast<PageLayout>()
                            .Where(p => p.Name == "HomePage.aspx")
                            .SingleOrDefault();

         //set default page
         var homePage = pubWeb.GetPublishingPages().Add("Home.aspx", layout);
         homePage.Title = "Home Page";
         homePage.Update();
         pubWeb.DefaultPage = homePage.ListItem.File;

         //Add initial webparts
         WebPartHelper.WebPartManager(web,
            homePage.ListItem.File.ServerRelativeUrl,
            Resources.Get(ResourceFile, "MySiteSettingsListName", _lang),
            Resources.Get(ResourceFile, "InitialWebPartsFileName", _lang));

         web.AllowUnsafeUpdates = true;
         web.Properties.Add(MySiteCreated, "true");
         web.Properties.Update();
         pubWeb.Update();

         //set the search centre url
         web.AllProperties["SRCH_ENH_FTR_URL"] = Resources.Get(ResourceFile, "SearchCentreUrl", _lang);
         web.Update();

         //delete default page
         var defaultPageFile = web.GetFile("Pages/default.aspx");
         defaultPageFile.Delete();
         web.AllowUnsafeUpdates = false;
      }
   }
}

There is for me one more key issue. What does it really do? I was struck by the unreadability of this code and was concerned by how many working parts there are and how they would all be combined.

Sample Three: wouldn’t this be nice?

Here's what we refactored that code to. Hopefully there is some more intention in this. You might read it like this: I have a Personal Site that I create and process with a new page, removing an existing page, activating a feature and then getting a couple of master pages.

I like this because I can immediately ask simple questions: why do I have to remove an existing page, and why are there two master pages? It's SharePoint, and there are of course good reasons. But now I am abstracting away what SharePoint has to do for me to get this feature activated. It's not perfect, but it is a good enough example to work on.

using System.Runtime.InteropServices;
using YourCompany.SharePoint.Domain.Model.Provisioning;
using YourCompany.SharePoint.Infrastructure;
using YourCompany.SharePoint.Infrastructure.Configuration;
using Microsoft.Office.Server.UserProfiles;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Publishing;

namespace YourCompany.SharePoint.MySites.Features.WebFeatureProvisioner
{
    [Guid("cb9c03cd-6349-4a1c-8872-1b5032932a04")]
    public class SiteFeatureEventReceiver : SPFeatureReceiver
    {
        [NewPage("Home.aspx", "Home Page", "HomePage.aspx")]
        [ActivateFeature(PackageCfg.PublishingWebFeature)]
        [RemovePage("Pages/default.aspx")]
        [MasterPage("CustomV4.master", MasterPage.MasterPageType.User)]
        [MasterPage("CustomMySite.master", MasterPage.MasterPageType.Host)]
        public override void FeatureActivated(SPFeatureReceiverProperties properties)
        {
            PersonalSiteProvisionerFactory
              .Create(properties.Feature.Parent as SPWeb)
              .Process();
        }
    }
}	

What I want to suggest is that this code is not necessarily easy to write, given the previous solution. We are going to have to bake our own classes around the code: there's the factory class and the attributes, and we'll also find that there are the classes that the factory returns. In the end, we should have testable code, and my hunch is that we are likely to get some reuse too.

The next entry, “Test Strategy in SharePoint: Part 4 – Event Receiver as layered Feature”, will look at how we can layer the code to make this code become a reality.


Test Strategy in SharePoint: Part 2 – good layering to aid testability

November 28th, 2010


Overall goal: write maintainable code over code that is easy to write (Freeman and Pryce, 2010)

In Part 1 – testing poor layering is not good TDD, I argued that we need to find better ways to think about testing SharePoint wiring code that do not confuse unit and integration tests. In this post, I outline a layering strategy that resolves this problem. Rather than only one project for code and one for tests, I use 3 projects for tests and 4 for the code – this strategy is based on DDD layering and the test automation pyramid.

  • DDD layering projects: Domain, Infrastructure, Application and UI
  • Test projects: System, Integration and Unit

Note: this entry does not give code samples – the next post will – but focuses on how the projects are organised within the Visual Studio solution and how they are sequenced when programming. I've included a task sheet that we use in Sprint Planning as a boilerplate list to mix and match the scope of features. Finally, I have a general rave on the need for disciplined test-first and test-last development.

Projects

Here's the quick overview of the layers; look further down for a fuller description.

  • Domain: the project which has representations of the application domain and has no references to other libraries (particularly SharePoint)
  • Infrastructure: this project references Domain and has the technology-specific implementations. In this case, it has all the SharePoint API implementations
  • Application: this project is a very light orchestration layer. It is a way to get logic out of the UI layer to make it testable. Currently, we actually put all our javascript jQuery widgets in this project (which I will post about later, because we unit test (BDD-style) all our javascript and thus need to keep it away from the UI)
  • UI: this is the wiring code for SharePoint but has little else – this will make more sense once you see that we integration test all SharePoint API code (which goes in Infrastructure) and that we unit test any models, services or validation (which go in Domain). For example, with Event Receivers, code in methods is rarely longer than a line or two

Test Projects

  • System Acceptance Tests: business-focused tests that describe the system – these tests should live long term reasonably unchanged
  • System Smoke Tests: tests that can run in any environment to confirm that it is up and running
  • Integration Tests: tests that have 1 dependency and 1 interaction, usually against a third-party API – in this case mainly the SharePoint API. These may create scenarios on each method
  • Unit Tests: tests that have no dependencies (or have them mocked out) – model tests, validations, service tests, exception handling

Solution Structure

Below is the source folder of code in the source repository (ie not lib/, scripts/, tools/). The solution file (.sln) lives in the src/ folder.

Taking a look below, we see our 4 layers with 3 test projects. In this sample layout, I have included folders which suggest that we have code around the provisioning and configuration of the site for deployment – see here for a description of our installation strategy. These functional areas exist across multiple projects: they have definitions in the Domain, implementations in the Infrastructure and both unit and integration tests.

I have also included Logging because central to any productivity gains in SharePoint is using logging and avoiding the debugger. We now rarely attach a debugger for development. And if we do, it is not our first tactic as was previously the case.

You may also notice Migrations/ in Infrastructure. These are the migrations that we use with migratordotnet.

Finally, the UI layer should look familiar and this is a subset of folders.

src/
  Application/

  Domain/
    Model/
      Provisioning/
      Configuration/
    Services/
      Provisioning/
      Configuration/
    Logging/
  
  Infrastructure/
    Provisioning/
    Configuration/
    Logging/    

  Tests.System/
    Acceptance/
      Provisioning/
      Configuration/
    Smoke/
  
  Tests.Integration/
    Fixtures/
    Provisioning/
    Configuration/
    Migrations/

  Tests.Unit/
    Fixtures/
    Model/
    Provisioning/
    Configuration/
    Logging/
    Services/
    Site/    
    
  Ui/
    Features/
    Layouts/
    Package/
    PageLayouts/
    Stapler/
    ...

Writing code in our layers in practice

The cadence of the developers work is also based on this separation. It generally looks like this:

  1. write acceptance tests (eg given/when/then) – see the skeleton after this list
  2. begin coding with tests
  3. sometimes starting with unit tests – eg new Features, or jQuery widgets
  4. in practice, because it is SharePoint, move into integration tests to isolate the API work
  5. complete the acceptance tests
  6. write documentation of the SharePoint process via screenshots
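
For step 1, the acceptance tests are plain given/when/then. A skeleton – the story wording here is invented for illustration – looks something like:

  [TestFixture]
  public class ProvisionSiteAcceptanceTest
  {
      [Test]
      public void SiteOwnerCanProvisionTeamSite()
      {
          // Given a farm with the feature installed
          // When a site is provisioned from the template
          // Then the site contains the expected lists and pages
          Assert.Inconclusive("parked until the domain and infrastructure code is hooked up");
      }
  }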

We also have a task sheet for estimation (for sprint planning) that is based around this cadence.

Task Estimation for story in Scrum around SharePoint feature

A note on test stability

Before I finish this post and start showing some code, I just want to point out that getting stable deployments and stable tests requires discipline. The key issues to allow for are the usual suspects:

  • start scripted deployment as early as possible
  • deploy with scripts as often as possible, if not all the time
  • try to never deploy or configure through the GUI
  • if you are going to require a migration (GUI-based configuration), script it early: while it is faster to do through the GUI, that is a developer-level (local) optimisation for efficiency and won’t help with stabilisation in the medium term
  • unit tests are easy to keep stable – if they aren’t then you are in serious trouble
  • integration tests are likely to be hard to keep stable – ensure that you have the correct setup/teardown lifecycle and that you can fairly assume that the system is clean (see the sketch after this list)
  • as per any test, make sure integration tests are not dependent on other tests (this is standard stuff)
  • system smoke tests should run immediately after an installation and should be able to be run in any environment at any time
  • system smoke tests should not be destructive, precisely because they are run in any environment, including production, to check that everything is working
  • system smoke tests shouldn’t manage setup/teardown because they are non-destructive
  • system smoke tests should be fast to run and fail
  • get all these tests running on the build server asap
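
To illustrate the setup/teardown lifecycle point above, here is a sketch of the discipline that keeps integration tests stable – the URL and the fixture are illustrative:

  [TestFixture]
  public class ListIntegrationTest
  {
      private SPSite site;
      private SPWeb web;

      [SetUp]
      public void OpenContext()
      {
          // each test gets its own context; we assume the target system is clean
          site = new SPSite("http://localhost/");
          web = site.OpenWeb();
      }

      [TearDown]
      public void DisposeContext()
      {
          // always release SharePoint objects so state does not leak between tests
          if (web != null) web.Dispose();
          if (site != null) site.Dispose();
      }

      [Test]
      public void CanReadLists()
      {
          Assert.IsNotNull(web.Lists);
      }
  }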

Test-first and test-last development

TDD does not need to be exclusively test-first development. I want to suggest that different layers require different strategies, but most importantly that there is a consistency to the strategy to help establish cadence. This cadence is going to reduce transaction costs – knowing when we are done, quality assurance for coverage, moving code out of development. Above I outlined writing code in practice: acceptance test writing, unit, integration and then acceptance test completion.

To do this, I write acceptance tests test-last. This means that as developers we write BDD-style user story (given/when/then) acceptance tests. While such a test is written first, it is rarely test-driven because we might not then actually implement the story directly (although sometimes we do). Rather, we park it. Then we move into the implementation which is encompassed by the user story, but we move into classical unit test assertion mode in unit and integration tests. Where there is a piece of code that is clearly unit testable (models, validation, services), this is completed test-first – and we pair on it, using Resharper support to code outside-in. We may also need to create data access code (ie SharePoint code) and this is created with integration tests. Interestingly, because it is SharePoint, we break many rules. I don’t want devs to write Infrastructure code test-last, but often we need to spike the API. So, we actually spike the code in the integration test and then refactor to the Infrastructure as quickly as possible. I think that this approach is slow and that we would be best to go test-first, but at this stage we are still getting a handle on good Infrastructure code to wrap the SharePoint API. The main point is that we don’t have untested code in Infrastructure (or Infrastructure code lurking in the UI). These integration tests, in my view, are test-last in most cases simply because we aren’t driving design from the tests.

At this stage, we have unfinished system acceptance tests, and code in the domain and infrastructure (all tested). What we then do is hook the acceptance test code up. We do this instead of hooking up the UI so that we don’t kid ourselves about whether the correct abstraction has been created. Once we have hooked up the acceptance tests, we can simply hook up the UI. However, the reverse has often not been the case. Nonetheless, the most important point is that we have hooked up our Domain/Infrastructure code to two clients (acceptance and UI), and this tends to prove that we have a maintainable level of abstraction for the current functionality/complexity. This approach is akin to when you have a problem and you go to multiple people to talk about it. By the time you have had multiple perspectives, you tend to get clarity about the issues. Similarly, in allowing our code to have multiple conversations, in the form of client libraries consuming it, we learn the sorts of issues our code is going to have – and hopefully, because it is software, we have refactored the big ones out (ie we can live with the level of cohesion and coupling for now).

I suspect for framework or even line of business applications, and SharePoint being one of many, we should live with the test-first and test-last tension. Test-first is a deep conversation that in my view covers off so many of the issues. However, like life, these conversations are not always the best to be had every time. But for the important issues, they will always need to be had and I prefer to have them early and often.

None of this means that individual developers get to choose which parts get test-first and test-last. It requires discipline to use the same sequencing for each feature. This takes time for developers to learn and leadership to encourage (actually: enforce, review and refine). I am finding that team members can learn the rules of a particular code base within 4–8 weeks, if that is any help.

Test Strategy in SharePoint: Part 1 – testing poor layering is not good TDD

November 27th, 2010 2 comments


Overall goal: write maintainable code over code that is easy to write (Freeman and Pryce, 2010)

My review of SharePoint and TDD indicated that we need to use layering techniques to isolate SharePoint and not to confuse unit tests with integration tests. Let’s now dive into the nature of the code we write in SharePoint.

This post is one of two. It reviews two existing code samples that demonstrate testing practices: SharePoint Magic 8 Ball & Unit Testing SharePoint Foundation with Microsoft Pex and Moles: Tutorial for Writing Isolated Unit Tests for SharePoint Foundation Applications. I argue against these practices because this type of mocking encourages poor code design: it relies exclusively on unit tests rather than isolating SharePoint behind integration tests. In the second post, I will cover examples that demonstrate layering and the separation of unit and integration tests, including how to structure Visual Studio solutions.

What is the nature of SharePoint coding practice?

At this stage, I am not looking at “customisation” code (ie WebParts) but rather “extension” code practices where we effectively configure up the new site.

  1. programming code is often “wiring” deployment code (eg files mostly of xml) that then needs to be glued together via .net code against the SharePoint API.
  2. resulting code is problematically procedural in nature and untestable outside its deployment environment
  3. this makes it error prone and slow
  4. it also encourages developers to work in large batches, across longer-than-needed time frames, and provides slow feedback
  5. problems tend to occur in later environments

A good summary of how the nature of a SharePoint solution may not lend itself to classical TDD:

So, now lets take a typical SharePoint project. Of course there is a range and gamut of SharePoint projects, but lets pick the average summation of them all. Thus in a typical SharePoint project, the portion where TDD is actually applicable is very small – which is the writing code part. In most, not all, SharePoint projects, we write code as small bandaids across the system, to cover that last mile – we don’t write code to build the entire platform, in fact the emphasis is to write as little code as possible, while meeting the requirements. So already, the applicability of TDD as a total percentage of the project is much smaller. Now, lets look at the code we write for SharePoint. These small bandaids that can be independent of each other, are comprised of some C#/VB.NET code, but a large portion of the code is XML files. These large portion of XML files, especially the most complex parts, define the UI – something TDD is not good at testing anyway. Yes I know attempts have been made, but attempt != standard. And the parts that are even pure C#, we deal with an API which does not lend itself well to TDD. You can TDD SharePoint code, but it’s just much harder.

What we found when we stuck to the procedural, wiring-up techniques was:

  1. repetitive, long blocks of code
  2. funky behaviours in SharePoint
  3. long periods of time sitting around watching the progress bar in the browser
  4. disagreement and confusion between (knowledgable) developers on what they should be doing and what was happening
  5. block copy inheritance (cut-and-paste) within and between Visual Studio solutions
  6. no automated testing

Looking at that problem, we have formulated the charitable position that standard SharePoint techniques allow you to write easy code – or more correctly, allow you to copy and paste others’ code and hack it to your needs. We decided that we would instead try to write maintainable code. This type of code is layered and SOLID.

Okay so where are the code samples for us to follow?

There are two main samples to look at and both try and deal with the SharePoint API. Let me cover these off first before I show where we went.

SharePoint Magic 8 Ball

SharePoint Magic 8 Ball is some sample source code from the Best Practices SharePoint conference in 2009. This is a nice piece of code to demonstrate both how to abstract away SharePoint and how not to worry about unit testing SharePoint.

Let me try and explain. The functionality of the code is that you ask a question of a “magic” ball and get a response – the response is actually just picking an “answer” randomly from a list. The design is a simple one in two parts: (1) the list provider for all the answers and (2) the picker (aka the Ball). The list is retrieved from SharePoint and the picker takes a list from somewhere.

The PickerTest (aka the ball)

 [TestFixture]
 public class BallPickerTest
 {
     [Test]
     public void ShouldReturnAnAnswerWhenAskedQuestion()
     {
         var ball = new Ball(); 
         var answer = ball.AskQuestion("Will it work?");

         Assert.IsNotNull(answer);
     }
} 

And the Ball class

public class Ball
{
    public List<string> Answers { get; set; }
    
    public Ball()
    {
        Answers = new List<string>{"yes"};
    }

    public string AskQuestion(string p)
    {
        Random random = new Random();
        int item = random.Next(Answers.Count);
        return Answers[item];
    }
}

That’s straightforward. So then the SharePoint class gets the persisted list.

public class SharePoint
{
    public List<string> GetAnswersFromList(string ListName)
    {
        List<String> answers = new List<string>();
        SPList list = SPContext.Current.Web.Lists[ListName];
        foreach (SPListItem item in list.Items)
        {
            answers.Add(item.Title);
        }
        return answers;
    }
}

Here’s the test that shows how you can mock out SharePoint. In this case, the tester is using TypeMock. Personally, what I am going to argue is that isn’t an appropriate test. You can do it, but I wouldn’t bother. I’ll come back to how I would rather write an integration test.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using NUnit.Framework;
using TypeMock.ArrangeActAssert;
using Microsoft.SharePoint;

namespace BPC.Magic8Ball.Test
{
    [TestFixture]
    public class SharePointTest
    {
        [Test]
        public void ShouldReturnListOfAnswersFromSharePointList()
        {
              SPList fakeList = Isolate.Fake.Instance<SPList>();
              Isolate.WhenCalled(() => SPContext.Current.Web.Lists["ListName"]).WillReturn(fakeList);
              SPListItem fakeAnswerOne = Isolate.Fake.Instance<SPListItem>();
              Isolate.WhenCalled(() => fakeAnswerOne.Title).WillReturn("AnswerZero");
              Isolate.WhenCalled(() => fakeList.Items).WillReturnCollectionValuesOf(new List<SPListItem> { fakeAnswerOne });
  
              var answers = new SharePoint().GetAnswersFromList("ListName");

              Assert.AreEqual("AnswerOne", answers.First());
        }
    }
}

The code above nicely demonstrates that TypeMock can mock out a static class that lives somewhere else. Put differently, you don’t have to use dependency injection patterns to do your mocking. I want to argue that this is a poor example because it is not a practical unit test – and I’m sure that if I wasn’t lazy I could find the test smell here already documented in xUnit Test Patterns by Gerard Meszaros. Not least the naming of the classes!

The key problem with the sample project is that the real test here is whether there is good decoupling between the Picker (the ball) and the answers (the SharePoint list). If there is any mocking to go on, it should be to check that the answers provider (the SharePoint list) is actually invoked. If this is the case, then it also points to a potential code smell: the dependency on the answer list (either as a List or as the SharePoint class) should be documented clearly. In other words, it might want to be passed in through the constructor. You might even argue that the ball shouldn’t have a property for the answers but rather a reference to the SharePoint class (the answer getter). This may seem a small point, but it is important because we need designs that scale – and decoupling has been proven to do so.

So, instead of this code:

var ball = new Ball();
var sharePoint = new SharePoint();

ball.Answers = sharePoint.GetAnswersFromList(ListName);
var answer = ball.AskQuestion("MyQuestion");

You are more likely to hand the list in:

var ball = new Ball(new SharePoint().GetAnswersFromList(ListName));
var answer = ball.AskQuestion("MyQuestion");

Or, if the SharePoint class itself is the dependency:

var ball = new Ball(new SharePoint(ListName));
var answer = ball.AskQuestion("MyQuestion");


What is at the heart of this sample is that it is trying to unit test everything. Instead it should split tests into unit and integration tests. I have documented elsewhere very specific definitions of unit and integration tests:

  • Unit – a test that has no dependencies (do our objects do the right thing, are they convenient to work with?)
  • Integration – a test that has only one dependency and tests one interaction (usually, does our code work against code that we can’t change?)

What would I do?

  • unit test the ball – there are a number of tests around the AskQuestion() method: does the randomness work, can I hand it a list of answers? If I am going to hand in the SharePoint class, I can then hand in a fake SharePoint class for stubbing and mock the class to check that it actually calls the GetAnswersFromList() method – see the sketch after this list
  • integration test the SharePoint class – the integration test requires that SharePoint is up and running. This is a great integration test because it checks that you have all your ducks lined up for an SPContext call and that you have used the correct part of the API. Interestingly, the harder part of the integration tests is getting the setup context correct. Having to deal with the Setup (and Teardown) is in fact one of the most important benefits of these types of integration tests. We need to remember that these integration tests merely confirm that the API is behaving as we would expect in the context in which we are currently working. No more. No less.
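
To make the unit-test point concrete, a test of the ball needs no SharePoint at all – hand the answers in and assert against them (this test is mine, not from the sample):

  [TestFixture]
  public class BallUnitTest
  {
      [Test]
      public void AnswerAlwaysComesFromTheSuppliedList()
      {
          var ball = new Ball { Answers = new List<string> { "yes", "no", "maybe" } };

          var answer = ball.AskQuestion("Will it work?");

          // whatever the randomness picks, it must be an answer we handed in
          Assert.IsTrue(ball.Answers.Contains(answer));
      }
  }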

In summary, this sample shows particular layering in the code but not in the testing. Layering your tests is just as important. In SharePoint, we have had the most success in creating abstractions exclusively for SharePoint and testing these as integration tests. These SharePoint abstractions have also been created in a way that we can hand mock implementations into other layers so that we can unit test the parts of the code that have logic. This is a simple design that effectively makes SharePoint just another persistence-type layer. There is a little more to it than that, but that isn’t for here.

Let’s turn to Pex and Moles and see if it provides an option any better to TypeMock. I suspect not because the issue here isn’t the design of the test system – these both can intercept classes hidden and used somewhere else with my own delegates – but rather the design of our code. I’m hoping to use tools that help me write better more maintainable code – so far both of them look like the “easy code” smell.

Pex and Moles

So I’ve headed off to Unit Testing SharePoint Foundation with Microsoft Pex and Moles: Tutorial for Writing Isolated Unit Tests for SharePoint Foundation Applications. It is a great document that tells us why SharePoint is not a framework built with testability baked in.

The Unit Testing Challenge. The primary goal of unit testing is to take the smallest piece of testable software in your application, isolate it from the remainder of the code, and determine whether it behaves exactly as you expect. Unit testing has proven its value, because a large percentage of defects are identified during its use.
The most common approach to isolate production code in a unit test requires you to write drivers to simulate a call into that code and create stubs to simulate the functionality of classes used by the production code. This can be tedious for developers, and might cause unit testing to be placed at a lower priority in your testing strategy.

It is especially difficult to create unit tests for SharePoint Foundation applications because:

* You cannot execute the functions of the underlying SharePoint Object Model without being connected to a live SharePoint Server.

* The SharePoint Object Model-including classes such as SPSite and SPWeb-does not allow you to inject fake service implementations, because most of the SharePoint Object Model classes are sealed types with non-public constructors.

Unit Testing for SharePoint Foundation: Pex and Moles. This tutorial introduces you to processes and concepts for testing applications created with SharePoint Foundation, using:

* The Moles framework – a testing framework that allows you to isolate .NET code by replacing any method with your own delegate, bypassing any hard-coded dependencies in the .NET code.

* Microsoft Pex – an automated testing tool that exercises all the code paths in a .NET code, identifies potential issues, and automatically generates a test suite that covers corner cases.

Microsoft Pex and the Moles framework help you overcome the difficulty and other barriers to unit testing applications for SharePoint Foundation, so that you can prioritize unit testing in your strategy to reap the benefits of greater defect detection in your development cycle.

The Pex and Moles sample code works through this example. It shows you how to mock out all the dependencies required from this quite typical piece of code. Taking a quick look we have these dependencies:

  • SPSite returning a SPWeb
  • Lists on web
  • then GetItemById on Lists returning an SPListItem as item
  • SystemUpdate on item

public void UpdateTitle(SPItemEventProperties properties) 
{ 
	using (SPWeb web = new SPSite(properties.WebUrl).OpenWeb()) 
	{ 
		SPList list = web.Lists[properties.ListId]; 
		SPListItem item = list.GetItemById(properties.ListItemId); 
		item["Title"] = item["ContentType"]; 
		item.SystemUpdate(false); 
	} 
}

Here’s a quick sample to give you a feeling if don’t want to read the pdf. In this code, the first thing to know is that we use a “M” prefix by convention to intercept classes. In this case, MSPSite intercepts the SPSite and then returns a MSPWeb on which we override the Lists and return a MSPListCollection.

string url = "http://someURL"; 

MSPSite.ConstructorString = (site, _url) => 
{ 
	new MSPSite(site) 
	{ 
		OpenWeb = () => new MSPWeb
		{ 
			Dispose = () => { }, 
			ListsGet = () => new MSPListCollection 

 ...
 

There is no doubt that Pex and Moles is up to the job of mocking these out. Go and look at the article yourself. Others have commented on getting used to the syntax, and I agree that, because I am unfamiliar with it, it is not as easy as, say, Moq. But it seems similar to MSpec. That’s not my gripe. My gripe is that the tool helps bake badness into my design. Take the example where the sample adds validation logic into the above code; the authors seem to think that this is okay.

public void UpdateTitle(SPItemEventProperties properties) 
{ 
	using (SPWeb web = new SPSite(properties.WebUrl).OpenWeb()) 
	{ 
		SPList list = web.Lists[properties.ListId]; 
		SPListItem item = list.GetItemById(properties.ListItemId); 
		
		string content = (string)item["ContentType"]; 
		if (content.Length < 5) 
			throw new ArgumentException("too short"); 
		if (content.Length > 60) 
			throw new ArgumentOutOfRangeException("too long"); 
		if (content.Contains("\r\n")) 
			throw new ArgumentException("no new lines"); 
		item["Title"] = content; 

		item.SystemUpdate(false); 
	} 
}	

The sample, described as “a more realistic example to test”, just encourages poor separation when writing line-of-business applications. This is validation logic, and there is no reason whatsoever to test validation alongside the need for data connections. Furthermore, there is little separation of concerns around exception throwing, handling and logging.

I’m therefore still frustrated that we are being shown easy to write code. Both Pex and Moles and TypeMock (regardless of how cool or great they are) are solutions for easy to write code. In this sample, we should see get separation of concerns. There are models, there are validators, there is the checking for validations and error handling. All these concerns can be written as unit tests and with standard libraries.

We will then also need integration tests to check that we can get list items. But these can be abstracted into other classes and, importantly, other layers. If we do that, we will also avoid the duplication of code that we currently see with the using (SPWeb web = new SPSite(properties.WebUrl).OpenWeb()) code block. I am going to return to this in another post which is less about layering and more about some specific SharePoint tactics once we have a layering strategy in place.
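
A sketch of the kind of abstraction I mean – the class name and shape are illustrative – where the site/web lifecycle lives in exactly one place:

  public class WebRepository
  {
      private readonly string webUrl;

      public WebRepository(string webUrl)
      {
          this.webUrl = webUrl;
      }

      // all SharePoint data access funnels through here, so the using block is written once
      public T DoInWeb<T>(Func<SPWeb, T> action)
      {
          using (SPSite site = new SPSite(webUrl))
          using (SPWeb web = site.OpenWeb())
          {
              return action(web);
          }
      }
  }

A caller might then read an item’s title with something like new WebRepository(properties.WebUrl).DoInWeb(web => (string) web.Lists[properties.ListId].GetItemById(properties.ListItemId)["Title"]) – and, as a bonus, the SPSite is now disposed as well, which the inline version forgot.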


validating-object-mothers

October 24th, 2010 2 comments

I was in a pairing session and we decided that this syntax described here is way too noisy:

new Banner { Name="" }.SetupWithDefaultValuesFrom(ValidMO.Banner).IsValid().ShouldBeFalse()

So we rewrote it using anonymous objects which made it nicer to read:

a.Banner.isValid().with(new { Name="john" })

A couple of notes on this style:

  • I am using the style of lowercase for test helpers. As per above: a, isValid and with.
  • I am using nested classes to create a DSL feel: eg a._object_ replaces the idea of ValidMO._object_
  • I am wrapping multiple assertions in isValid and isInvalid (they are wrappers that call the domain object’s IsValid and provide assertions – these assertions can be overwritten for different test frameworks, which at this stage is MSTest)

Unit test object builder

This code has two tests: Valid and InvalidHasNoName. The first test checks that I have set up (and maintained) the canonical model correctly and that it behaves well with the validator. The second test demonstrates testing invalid by exception – that is, by overriding a single field. Here the rule is that the banner is invalid if it has no name.

  using Domain.Model;
  using Microsoft.VisualStudio.TestTools.UnitTesting;

  namespace UnitTests.Domain.Model
  {
      [TestClass]
      public class BannerPropertyValidationTest
      {
          [TestMethod]
          public void Valid()
          {
              a.Banner.isValid();
          }

          [TestMethod]
          public void InvalidHasNoName()
          {
              a.Banner.isInvalid().with(new {Name = ""});
          }
      } 
  }

Here’s the canonical form that gives us the object as a.Banner:

  namespace UnitTests.Domain
  {
      public static class a
      {
          public static Banner Banner
          {
              get  { return new Banner  { Name = "Saver" }; }
          }
      }
  }

At this stage, you should have a reasonable grasp of the object through the tests and what we think is exemplar scenario data. Here’s the model with validations. (I won’t show it here, but there is also an extension method that runs the validators – it appears at the end of this post.)

  using Castle.Components.Validator;

  namespace Domain.Model
  {
      public class Banner
      {
        [ValidateNonEmpty]
        public virtual string Name { get; set; }

        // no validations on Description; it is exercised by the chained with() tests below
        public virtual string Description { get; set; }
      }
  }

Builder extensions: with

Now for the extensions that hold this code together. It is a simple piece of code that takes the model and anonymous class and merges them. As a side note I originally thought that I would use LinFu to do this but it can’t duck type against concrete classes, only interfaces. And I don’t have interfaces on a domain model.

	using System;

	namespace Domain
	{
	    public static class BuilderExtensions
	    {
	        public static T with<T>(this T model, object anon) where T : class
	        {
	            foreach (var anonProp in anon.GetType().GetProperties())
	            {
	                var modelProp = model.GetType().GetProperty(anonProp.Name);
	                if (modelProp != null)
	                {
	                    modelProp.SetValue(model, anonProp.GetValue(anon, null), null);
	                }
	            }
	            return model;
	        }
	    }
	}

So if you want to understand the different scenarios it works on, here are the tests:

	using Domain;
	using Domain.Model;
	using Microsoft.VisualStudio.TestTools.UnitTesting;

	namespace Tests.Unit
	{
	    [TestClass]
	    public class WithTest
	    {
	        [TestMethod]
	        public void UpdateBannerWithName_ChangesName()
	        {
	            var a = new Banner { Name = "john" };
	            var b = a.with(new { Name = "12345" });
	            Assert.AreEqual(b.Name, "12345");
	        }

	        [TestMethod]
	        public void UpdateBannerWithEmptyName_ChangesName()
	        {
	            var a = new Banner { Name = "john" };
	            var b = a.with(new { Name = "" });
	            Assert.AreEqual(b.Name, "");
	        }

	        [TestMethod]
	        public void UpdateBannerWithEmptyName_ChangesNameAsRef()
	        {
	            var a = new Banner { Name = "john" };
	            a.with(new { Name = "" });
	            Assert.AreEqual(a.Name, "");
	        }

	        [TestMethod]
	        public void UpdateBannerChainedWith_ChangesNameAndDescriptionAsRef()
	        {
	            var a = new Banner { Name = "john" };
	            a.with(new { Name = "" }).with(new { Description = "hi" });
	            Assert.AreEqual(a.Name, "");
	            Assert.AreEqual(a.Description, "hi");
	        }

	        [TestMethod]
	        public void UpdateBannerWithName_ChangesNameOnly()
	        {
	            var a = new Banner { Name = "john", Description = "ab" };
	            var b = a.with(new { Name = "12345" });
	            Assert.AreEqual(b.Name, "12345");
	            Assert.AreEqual(b.Description, "ab");
	        }

	        [TestMethod]
	        public void UpdateBannerWithPropertyDoesntExist_IsIgnored()
	        {
	            var a = new Banner { Name = "john" };
	            var b = a.with(new { John = "12345" });
	            // nothing happens!
	        }
	    }
	}

Builder extensions: validators

Now that we have the builder in place, we want to be able to do assertions on the new values in the object. Here’s what we are looking for: a.Banner.isValid().with(new { Name = "john" }). The complexity here is that we want to write the isValid or isInvalid before the with. We felt that it read better. This adds a little complexity to the code – but not much.

The general structure below goes something like this:

  1. add an extension method for the isValid on a domain object
  2. that helper returns a ValidationContext in the form of a concrete Validator with a test assertion
  3. we need to create another with on a ValidationContext to run the Validator
  4. Finally, in the with we chain on the anonymous class and do the assert

	using Domain;
	using Domain.Model;
	using Microsoft.VisualStudio.TestTools.UnitTesting;

	namespace Tests.Unit
	{
	    public static class ValidationContextExtensions
	    {
	        public static ValidationContext<T> isInvalid<T>(this T model)
	        {
	            return new Validator<T>(model, (a) => Assert.IsFalse(a));
	        }

	        public static ValidationContext<T> isValid<T>(this T model)
	        {
	            return new Validator<T>(model, (a) => Assert.IsTrue(a));
	        }

	        public static T with<T>(this ValidationContext<T> model, object exceptions) where T : class
	        {
	            model.Test.with(exceptions);
	            model.Assert(model.Test.IsValid());
	            return model.Test;
	        }
	    }

	    public interface ValidationContext<T>
	    {
	        T Test { get;  set; }
	        Action<bool> Assertion { get;  set; }
	        void Assert(bool isValid);
	    }

	    public class Validator<T> : ValidationContext<T>
	    {
	        public Validator(T test, Action<bool> assertion) {
	            Test = test;
	            Assertion = assertion;
	        }
	        public T Test { get; set; }
	        public virtual Action<bool> Assertion { get; set; }
	        public void Assert(bool value)
	        {
	            Assertion.Invoke(value);
	        }
	    }
	}

Just a last comment about the interface and implementation. I have gone for naming the interface differently from the implementation (ie not IValidator) because I want to avoid the I-prefix convention and see where it takes me. In this case, the with needs to be able to chain itself to something – to me this is the validation context. This context then has a validator in it (eg valid/invalid). In this case, the interface isn’t merely a contract; it is actually the type the test code works against. In fact, we could almost have the interface living without its property and method definitions at this stage, or perhaps the interface could have become an abstract class – either solution would entail less code (but slightly weird conventions).

Righto, a bit more showing of tests last:

	using Domain.Model;
	using Microsoft.VisualStudio.TestTools.UnitTesting;

	namespace Tests.Unit
	{

	    [TestClass]
	    public class ValidationContextTest
	    {
	        private const string NameCorrectLength = "johnmore_than_six";
	        private const string NameTooShort = "john";

	        [TestMethod]
	        public void UpdateBannerWithInvalidExtensionOnWith()
	        {
	            var a = new Banner { Name = NameTooShort };
	            a.isInvalid().with(new { Name = "" });
	        }

	        [TestMethod]
	        public void UpdateBannerWithValidExtensionOnWith()
	        {
	            var a = new Banner { Name = NameTooShort };
	            a.isValid().with(new { Name = NameCorrectLength });
	        }

	        [TestMethod]
	        public void NewBannerIsValidWithoutUsingWith()
	        {
	            var a = new Banner { Name = NameTooShort };
	            a.isValid();
	        }

	        [TestMethod]
	        public void NewBannerIsInvalidWithoutUsingWith()
	        {
	            var a = new Banner { Name = NameCorrectLength };
	            a.isInvalid();
	        }
	    }
	}

Just a final bit, if you are wondering about IsValid (with caps) versus isValid: here’s the validator-running code on the domain model that we are wrapping in our test validators.

	using Castle.Components.Validator;

	namespace Domain.Model
	{
	    public static class ValidationExtension
	    {
	        private static readonly CachedValidationRegistry Registry = new CachedValidationRegistry();

	        public static bool IsValid<T>(this T model) where T : class 
	        {
	            return new ValidatorRunner(Registry).IsValid(model);
	        }
	    }
	}

I hope that helps.

Object Mothers as Fakes

July 30th, 2009 No comments

Update: I was in a pairing session and we decided that this syntax is way too noisy, so we have rewritten it – effectively to a.Banner.isValid().with(new { Name = "john" }). See the validating-object-mothers post above.

This is a follow-up to Fakes Arent Object Mothers Or Test Builders. Over the last few weeks I have had the chance to reengage with MOs and fakes. I am going to document a helper or two for building fakes using extensions. But before that, I want to make an observation or two:

  • I have been naming my mother objects as ValidMO.xxxxx and making sure they are static
  • I find that I have a ValidMO per project and that is better than a shared one. For example, in my unit tests the validMO tends to need Ids populated and that is really good as I explore the domain and have to build up the object graph. In my integration tests, however, Id shouldn’t be populated because that comes from the database and of course changes over time. My MOs in the regression and acceptance test are different again. I wouldn’t have expected this. It is great because I don’t have to spawn yet another project just to hold the MO data as I have in previous projects.
  • I find that I only need one MO per object
  • Needing another valid example suggests in fact a new type of domain object
  • I don’t need to have invalid MOs (I used to)

Now for some code.

When building objects for test, the pattern is one of:

  • provide the MO
  • build the exceptions in a new object and fill out the rest with the MO
  • not use the MO at all (very rarely)

Note: I use this pattern to build objects with basic properties and not Lists – you’ll need to extend the tests/code for that.

To do this I primarily use a builder as an extension. This extension differs between unit and integration tests.

Unit test object builder

This code has two tests: Valid and InvalidHasNoName. The tests demonstrate a couple of things, the first of which is that I am bad for having three assertions in the first test. Let’s just say that’s for readability ;-) The first set of assertions checks that I have set up the ValidMO correctly and that it behaves well with the extension helper “SetupWithDefaultValuesFrom”. This series of tests has been really important in the development of the extensions and quickly points to problems – and believe me, this approach hits limits quickly. Nonetheless, it is great for most POCOs.

The second test demonstrates testing invalid by exception. Here the rule is that the image banner is invalid if it has no name. So the code says: make the name empty and populate the rest of the model.

  using Core.Domain.Model;
  using Contrib.TestHelper;
  using Microsoft.VisualStudio.TestTools.UnitTesting;
  using UnitTests.Core.Domain.MotherObjects;
  using NBehave.Spec.MSTest;

  namespace UnitTests.Core.Domain.Model
  {
      /// <summary>
      /// Summary description for Image Banner Property Validation Test
      /// </summary>
      [TestClass]
      public class ImageBannerPropertyValidationTest
      {
          [TestMethod]
          public void Valid()
          {
              ValidMO.ImageBanner.ShouldNotBeNull();
              ValidMO.ImageBanner.IsValid().ShouldBeTrue();
              new ImageBanner().SetupWithDefaultValuesFrom(ValidMO.ImageBanner).IsValid().ShouldBeTrue();
          }

          [TestMethod]
          public void InvalidHasNoName()
          {
              new ImageBanner { Name = "" }.SetupWithDefaultValuesFrom(ValidMO.ImageBanner).IsValid().ShouldBeFalse();
          }
      } 
  }

Here’s the ValidMO.

  namespace UnitTests.Core.Domain.MotherObjects
  {
      public static class ValidMO
      {
          public static IBanner ImageBanner
          {
              get
              {
                  return new ImageBanner
                             {
                                 Id = 1,
                                 Name = "KiwiSaver",
                                 Url = "http://localhost/repos/first-image.png",
                                 Destination = "http://going-to-after-click.com",
                                 Description = "Kiwisaver Banner for latest Govt initiative",
                                 IsActive = true,
                                 IsDeleted = false
                             };
              }
          }
      }
  }

At this stage, you should have a reasonable grasp of the object through the tests and what we think is exemplar scenario data. Here’s the model if you insist.

  using Castle.Components.Validator;

  namespace Core.Domain.Model
  {
      public class ImageBanner : Banner
      {
          [ValidateNonEmpty, ValidateLength(0, 200)]
          public virtual string Url { get; set; }

          [ValidateNonEmpty, ValidateLength(0, 200)]
          public string Destination { get; set; }
      }
  }

Builder extensions

Now for the extensions that hold this code together. It is a simple piece of code that takes the model and the MO and merges them. Before we go too far: it is actually a little more ugly than I want it to be, and I haven’t had time to think of a more elegant solution. There are actually two methods: one that overrides the fake’s defaults with any set model property, treating zero ints/longs and empty strings as unset, and one that lets empty strings and zeros count as values. So generally, you need the two.

  using System;
  using System.Linq;
  using Contrib.Utility;

  namespace Contrib.TestHelper
  {
      /// <summary>
      /// Test helpers on Mother objects
      /// </summary>
      public static class MotherObjectExtensions
      {

          /// <summary>
          /// Setups the specified model ready for test by merging a fake model with the model at hand. The code merges
          /// the properties of the given model with any defaults from the fake. If the value of a property on the model is an int and its value is 0 
          /// it is treated as null and the fake is used instead.
          /// </summary>
          /// <typeparam name="T"></typeparam>
          /// <param name="model">The model.</param>
          /// <param name="fake">The fake.</param>
          /// <returns>the model</returns>
          public static T SetupWithDefaultValuesFrom<T>(this T model, T fake)
          {
              var props = from prop in model.GetType().GetProperties() // select model properties to populate the fake, because the fake is the actual base
                          where prop.CanWrite
                          && prop.GetValue(model, null) != null
                          && (
                              ((prop.PropertyType == typeof(int) || prop.PropertyType == typeof(int?)) && prop.GetValue(model, null).As<int>() != 0)
                           || ((prop.PropertyType == typeof(long) || prop.PropertyType == typeof(long?)) && prop.GetValue(model, null).As<long>() != 0)
                           || (prop.PropertyType == typeof(string) && prop.GetValue(model, null).As<string>() != String.Empty)
                              )
                          select prop;

              foreach (var prop in props)
                  prop.SetValue(fake, prop.GetValue(model, null), null); //override the fake with model values

              return fake;
          }
   
          /// <summary>
          /// Setups the specified model ready for test by merging a fake model with the model at hand. The code merges
          /// the properties of the given model with any defaults from the fake. This method is the same as <see cref="SetupWithDefaultValuesFrom{T}"/>
          /// except that empty strings or 0 int/long are able to be part of the setup model
          /// </summary>
          /// <typeparam name="T"></typeparam>
          /// <param name="model">The model.</param>
          /// <param name="fake">The fake.</param>
          /// <returns>the model</returns>
          public static T SetupWithDefaultValuesFromAllowEmpty<T>(this T model, T fake)
          {
              var props = from prop in model.GetType().GetProperties() // select model properties to populate the fake, because the fake is the actual base
                          where prop.CanWrite
                          && prop.GetValue(model, null) != null
                          select prop;

              foreach (var prop in props)
                  prop.SetValue(fake, prop.GetValue(model, null), null); //override the fake with model values

              return fake;
          }
      }
  }

Oh, and I just noticed there is another little helper in there too. It is the object.As helper that casts my results – in this case, as a string. I will include it and thank Rob for the original code and Mark for an update – and probably our employer for sponsoring our after-hours work ;-) :

    using System;
    using System.IO;


    namespace Contrib.Utility
    {
        /// <summary>
        /// Class used for type conversion related extension methods
        /// </summary>
        public static class ConversionExtensions
        {
            public static T As<T>(this object obj) where T : IConvertible
            {
                return obj.As<T>(default(T));
            }

            public static T As<T>(this object obj, T defaultValue) where T : IConvertible
            {
                try
                {
                    string s = obj == null ? null : obj.ToString();
                    if (s != null)
                    {
                        Type type = typeof(T);
                        bool isEnum = typeof(Enum).IsAssignableFrom(type);
                        return (T)(isEnum ?
                            Enum.Parse(type, s, true)
                            : Convert.ChangeType(s, type));
                    }
                }
                catch
                {
                }
                return defaultValue; 
            }

            public static T? AsNullable<T>(this object obj) where T : struct, IConvertible
            {
                try
                {
                    string s = obj as string;
                    if (s != null)
                    {
                        Type type = typeof(T);
                        bool isEnum = typeof(Enum).IsAssignableFrom(type);
                        return (T)(isEnum ?
                            Enum.Parse(type, s, true)
                            : Convert.ChangeType(s, type));
                    }
                }
                catch
                {

                }
                return null;
            }

            public static byte[] ToBytes(this Stream stream)
            {
                int capacity = stream.CanSeek ? (int)stream.Length : 0;
                using (MemoryStream output = new MemoryStream(capacity))
                {
                    int readLength;
                    byte[] buffer = new byte[4096];

                    do
                    {
                        readLength = stream.Read(buffer, 0, buffer.Length);
                        output.Write(buffer, 0, readLength);
                    }
                    while (readLength != 0);

                    return output.ToArray();
                }
            }

            public static string ToUTF8String(this byte[] bytes)
            {
                if (bytes == null)
                    return null;
                else if (bytes.Length == 0)
                    return string.Empty;
                var str = System.Text.Encoding.UTF8.GetString(bytes);

                // If the string begins with the byte order mark
                if (str[0] == '\xFEFF')
                    return str.Substring(1);
                else
                {
                    return str;
                }
            }
        }
    }
    

Finally, if you actually do use this code, here are the tests:

  using Contrib.TestHelper;
  using Microsoft.VisualStudio.TestTools.UnitTesting;
  using NBehave.Spec.MSTest;

  namespace Contrib.Tests.TestHelper
  {
      /// <summary>
      /// Summary description for MotherObject Extensions Test
      /// </summary>
      [TestClass]
      public class MotherObjectExtensionsTest
      {

          [TestMethod]
          public void EmptyString()
          {
              var test = new Test { }.SetupWithDefaultValuesFrom(new Test { EmptyString = "this" });
              test.EmptyString.ShouldEqual("this");
          }
          [TestMethod]
          public void ModelStringIsUsed()
          {
              var test = new Test { EmptyString = "that" }.SetupWithDefaultValuesFrom(new Test { EmptyString = "this" });
              test.EmptyString.ShouldEqual("that");
          }
          [TestMethod]
          public void EmptyStringIsAccepted()
          {
              var test = new Test { EmptyString = "" }.SetupWithDefaultValuesFromAllowEmpty(new Test { EmptyString = "this" });
              test.EmptyString.ShouldEqual("");
          }

          [TestMethod]
          public void ZeroInt()
          {
              var test = new Test { }.SetupWithDefaultValuesFrom(new Test { ZeroInt = 1 });
              test.ZeroInt.ShouldEqual(1);
          }
          [TestMethod]
          public void ModelIntIsUsed()
          {
              var test = new Test { ZeroInt = 2 }.SetupWithDefaultValuesFrom(new Test { ZeroInt = 1 });
              test.ZeroInt.ShouldEqual(2);
          }
          [TestMethod]
          public void ZeroIntIsAccepted()
          {
              var test = new Test { ZeroInt = 0}.SetupWithDefaultValuesFromAllowEmpty(new Test { ZeroInt = 1 });
              test.ZeroInt.ShouldEqual(0);
          }

          [TestMethod]
          public void ZeroLong()
          {
              var test = new Test { }.SetupWithDefaultValuesFrom(new Test { ZeroLong = 1 });
              test.ZeroLong.ShouldEqual(1);
          }
          [TestMethod]
          public void ModelLongIsUsed()
          {
              var test = new Test { ZeroLong = 2 }.SetupWithDefaultValuesFrom(new Test { ZeroLong = 1 });
              test.ZeroLong.ShouldEqual(2);
          }
          [TestMethod]
          public void ZeroLongIsAccepted()
          {
              var test = new Test {ZeroLong = 0}.SetupWithDefaultValuesFromAllowEmpty(new Test { ZeroLong = 1 });
              test.ZeroLong.ShouldEqual(0);
          }

          private class Test
          {
              public string EmptyString { get; set; }
              public int ZeroInt { get; set; }
              public long ZeroLong { get; set; }
          }
      }
  }

I hope that helps.

Fakes Arent Object Mothers Or Test Builders

November 20th, 2008 No comments

Fakes are not Object Mothers or Test Data Builders and Extensions are a bridge to aid BDD through StoryQ

I’ve just spent the week at Agile2008 and have had it pointed out to me that I have been using Fakes as nomenclature when in fact it is an Object Mother pattern. So I have set off to learn about the Object Mother pattern and in doing so came across the Test Data Builder pattern. But because I am currently working in C# 3.5 and using PI/POCOs/DDD, the builder pattern at looking a little heavy and the criticism of Object Mother holds that it ends up as a God object and hence a little too cluttered.

I’ve also found utilising Extensions in C# 3.5 as a good way to keep the OM clean. By adding extensions, it cleans up the code significantly such that BDD through StoryQ became attractive. Extensions have these advantages:

  • Allows you to put a SetUp (Build/Construct) method on the object itself
  • Makes this/these methods only available within the test project only
  • Keeps data/values separate from setup ie as a cross-cutting concern
  • In tests, specific edge data – or basic tests – aren’t put into object mothers
  • Extensions end up concise and DRY code

Background

I have been using Fake models as a strategy to unit test (based on Jimmy Nilsson’s DDD in C# book). Technically, these are not fakes. According to most write-ups, fakes are a strategy to replace service objects rather than value objects. My use of fakes is rather the “Object Mother” pattern. (Am I the only person six years behind? Probably.) Since learning about this pattern, I’ve found it on Fowler’s bliki and the original description at XP Universe 2001 (however, I haven’t been able to download the paper as yet – xpuniverse.com isn’t responding).

Having read entries around this pattern, another pattern emerged: the “Test Data Builder” pattern. It is a likable pattern. (In fact, it could be leveraged as an internal DSL – but I haven’t pursued that as yet.) But, given the work I do in C#, it looks a little heavy, as it is designed to cope with complex dependencies. In contrast, the fake/object mother I have been using has been really easy to teach to, and well liked by, developers.

Basic approach

To test these approaches, I am going to create a simple model: User, which has an Address. Both models have validations on the fields, and I can ask the model to validate itself. You’ll note that I am using Castle’s validations component. This breaks POCO rules by taking on a library dependency, but in practice it is a good trade-off. My basic test heuristic:

  • populate with defaults = valid
  • test each field with an invalid value
  • ensure that inherited objects are also validated

Four Strategies:

  1. Object Mother
  2. Test Data Builder
  3. Extensions with Object Mother
  4. StoryQ with Extensions (and Object Mother)

Strategy One: Object Mother

First, the test loads up a valid user through a static method (or property if you wish). I use the convention “Valid” to always provide back what is always a valid model. This is a good way to demonstrate to new devs what the exemplar object looks like. Interestingly, in C# 3.5, the new object initialiser syntax works very well here. You can read down the list of properties easily, often without needing to look at the original model code. Moreover, in the original object, there is no need for an overloaded constructor.

[Test]
 public void ValidUser()
 {
     var fake = TestUser.Valid();
     Assert.IsTrue(fake.IsOKToAccept());
 }

  public class TestUser
  {
      public static User Valid()
      {
          return new User {
              Name = "Hone",
              Email = "Hone@somewhere.com",
              Password = "Hone",
              Confirm = "Hone"
          };
      }
  }

Oh, and here’s the User model if you are interested. Goto to Castle Validations if this isn’t clear.

 public class User
 {
     [ValidateLength(1, 4)]
     public string Name { get; set; }

     [ValidateEmail]
     public string Email { get; set; }

     [ValidateSameAs("Confirm")]
     [ValidateLength(1,9)]
     public string Password { get; set; }
     public string Confirm { get; set; }

     public bool IsOKToAccept()
     {
         ValidatorRunner runner = new ValidatorRunner(new CachedValidationRegistry());
         return runner.IsValid(this);
     }
  }

Second, the tests work through confirming that each validation works. Here we need to work through each of the fields, checking for invalid values. With the object mother pattern, each case is a separate static method: email, name and password. The strategy is to always start with the valid model, make only one change and test. It’s a simple strategy, easy to read, and it avoids duplication. The problem with this approach is that it hides test data; we have to trust that the method “InvalidName” actually sets an invalid name, or follow it through to the static method. In practice, it works well enough and avoids duplication.

public static User InvalidEmail()
{
    var fake = Valid();
    fake.Email = "invalid@@some.com";
    return fake;
}

public static User InvalidName()
{
    var fake = Valid();
    fake.Name = "with_more_than_four";
    return fake;
}

public static User InvalidPassword()
{
    var fake = Valid();
    fake.Password = "not_same";
    fake.Confirm = "different";
    return fake;
}

[Test]
public void InValidEmail()
{
    var fake = TestUser.InvalidEmail();
    Assert.IsFalse(fake.IsOKToAccept());
}

[Test]
public void InValidName()
{
    var fake = TestUser.InvalidName();
    Assert.IsFalse(fake.IsOKToAccept());
}

[Test]
public void InValidPassword()
{
    var fake = TestUser.InvalidPassword();
    Assert.IsFalse(fake.IsOKToAccept());
}

Strategy Two: Test Data Builder

I’m not going to spend long on this piece of code because the builder looks like too much work. The pattern is well documented so I assume you already understand it. I do think that it could be useful if you want to create an internal DSL to work through dependencies. Put differently, this example is too simple to demonstrate a case for when to use this pattern (IMHO).

First, here’s the test for the valid user. I found that I injected the valid user behind the scenes (with an object mother) so that I could have a fluent interface with Build(). I can overload the constructor to explicitly inject a user too. I have two options when passing in an object: (1) inject my object mother or (2) inject a locally constructed object. The local construction is useful for explicitly seeing what is being tested. But really, it is the syntactic sugar of C# that is giving visibility rather than the pattern; so, for simple cases, the syntax of the language renders the pattern more verbose.

[Test]
public void ValidUser()
{
    var fake = new UserBuilder().Build();
    Assert.IsTrue(fake.IsOKToAccept());

}

[Test]
public void LocalUser()
{
    var fake = new UserBuilder(TestUser.Valid()).Build();
    Assert.IsTrue(fake.IsOKToAccept());

    fake = new UserBuilder(new User
                    {
                        Name = "Hone", 
                        Email = "good@com.com",
                        Password = "password",
                        Confirm = "password",
                        Address = new Address
                                      {
                                          Street = "Fred",
                                          Number = "19"
                                      }
                    })
                    .Build();
    Assert.IsTrue(fake.IsOKToAccept());
}

public class UserBuilder
{
    private readonly User user;

    public UserBuilder()
    {
        user = TestUser.Valid();
    }

    public UserBuilder(User user)
    {
        this.user = user;
    }

    public User Build() { return user; }
}

Second, let’s validate each field. On the positive side, it is clear in the code what constitutes an invalid value, and the builder caters for dependencies, such as the one between password and confirm in “withPassword”. But, really, there is just too much typing: I have to create a method for every field and dependency. For simple models I am not going to do this; for complex or large models it would take ages.

[Test]
public void InValidEmail()
{
    var fake = new UserBuilder()
        .withEmail("incorect@@@emai.comc.com.com")
        .Build();
    Assert.IsFalse(fake.IsOKToAccept());
}

[Test]
public void InValidName()
{
    var fake = new UserBuilder()
        .withName("a_name_longer_than_four_characters")
        .Build();
    Assert.IsFalse(fake.IsOKToAccept());
}

[Test]
public void InValidPassword()
{
    var fake = new UserBuilder()
        .withPassword("bad_password")
        .Build();
    Assert.IsFalse(fake.IsOKToAccept());
}

public class UserBuilder
{
    private readonly User user;

    public UserBuilder()
    {
        user = TestUser.Valid();
    }

    public UserBuilder(User user)
    {
        this.user = user;
    }

    public UserBuilder withEmail(string email)
    {
        user.Email = email;
        return this;
    }

    public UserBuilder withPassword(string password)
    {
        user.Confirm = password;
        user.Password = password;
        return this;
    }

    public UserBuilder withName(string name)
    {
        user.Name = name;
        return this;
    }

    public UserBuilder withAddress(Address address)
    {
        user.Address = address;
        return this;
    }

    public User Build() { return user; }
}

Strategy Three: Object Mother with Extensions

Having tried the two previous patterns, I now turn to what extension methods have to offer. (Extensions are something I have been meaning to try for a while.) As it turns out, extensions combined with object initializers allow for separation of concerns and DRY, and also let us separate valid and invalid data in tests. There is a downside, of course: it requires some reflection code to make it all play nicely. More code – a slippery slope, some might say …

First, I have added a new method to my User model in the form of an extension. I have named it SetUp so that it translates into the setup and teardown (init and dispose) phases of unit testing. I could use Build or Construct instead. This method returns my object mother. I still like to keep my object mother data separate because I think of construction and data as separate concerns.

[Test]
public void ValidUser()
{
    var fake = new User().SetUp();
    Assert.IsTrue(fake.IsOKToAccept());
}

// extension methods live in a static class; the simplest SetUp just returns the object mother
public static User SetUp(this User user)
{
    User fake = TestUser.Valid();
    return fake;
}

I also want to be able to create a version of the user that is visible from the test code. This is where more code is required: we combine any fields provided by the test with the default, valid model. The method accepts a partially hydrated object and returns a fully hydrated test user, with defaults filled in. The goal is that the unit test only provides the specifics for that test; the rest of the valid fields are opaque to it. The code reflects over the model, hydrating only the empty fields so that validations do not fail. This reflection code is shared by all SetUp methods across models.

[Test]
public void LocalUser()
{
    var fake = new User { Name = "John", Email = "valid@someone.com" }.SetUp();
    Assert.IsTrue(fake.IsOKToAccept());
}

public static User SetUp(this User user)
{
    User fake = TestUser.Valid();
    SyncPropertiesWithDefaults(user, fake);
    return fake;
}

// requires: using System.Reflection;
private static void SyncPropertiesWithDefaults(object obj, object @base)
{
    // copy each non-null property from the test's partial object onto the
    // valid default object, leaving the remaining default values untouched
    foreach (PropertyInfo prop in obj.GetType().GetProperties())
    {
        object val = prop.GetValue(obj, null);
        if (val != null)
        {
            prop.SetValue(@base, val, null);
        }
    }
}

Second, let’s again look at the code to validate all the fields. Note that at this point there are no other object mothers. The way I see it, this strategy says: use object mothers to model significant data and avoid cluttering them with edge cases. The result is that each test hands in only the field and value it wants to isolate. I find this readable, and it addresses the earlier concern that the edge-case (invalid) data was not visible in the test.

[Test]
public void InValidEmail()
{
    var fake = new User { Email = "BAD@@someone.com" }.SetUp();
    Assert.IsFalse(fake.IsOKToAccept());
}

[Test]
public void InValidName()
{
    var fake = new User { Name = "too_long_a_name" }.SetUp();
    Assert.IsFalse(fake.IsOKToAccept());
}

[Test]
public void InValidPassword()
{
    var fake = new User { Password = "password_one", Confirm = "password_two" }.SetUp();
    Assert.IsFalse(fake.IsOKToAccept());
}

Extensions combined with object initializers are nice. We can also format the code to make it read like a fluent interface if needed. For example:

[Test]
public void InValidPassword()
{
    var fake = new User 
                { 
                    Password = "password_one", 
                    Confirm = "password_two" 
                }
                .SetUp();
    Assert.IsFalse(fake.IsOKToAccept());
}

Strategy Four: BDD and StoryQ

Having worked out that extensions reduce repetitious code, I still think there is a smell here. Are those edge cases going to add value in their current form? They really are bland. Sure, the code is tested. But, really, who wants to read and review those tests? I certainly didn’t use them to develop the model code; they merely assert over time that my assumptions haven’t changed. Let’s look at how I would have written the same tests using BDD, and in particular StoryQ. Here’s a potential usage of my User model.

Story: Creating and maintaining users

  As a user
  I want to have an account
  So that I can register, login and return to the site

  Scenario 1: Registration Page
    Given I enter my name, address, email and password
    When username isn't longer than 4 characters
      And email is validated
      And password is correct length and matches confirm
      And address is valid
    Then I can now login
      And I am sent a confirmation email

Leaving aside any problems in the workflow (which there are), the story works with the model in context. Here is the code that generates this story:

[Test]
public void Users()
{
    Story story = new Story("Creating and maintaining users");

    story.AsA("user")
        .IWant("to have an account")
        .SoThat("I can register, login and return to the site");

    story.WithScenario("Registration Page")
        .Given(Narrative.Exec("I enter my name, address, email and password", 
                 () => Assert.IsTrue(new User().SetUp().IsOKToAccept())))
        
        .When(Narrative.Exec("username isn't longer than 4 characters", 
                 () => Assert.IsFalse(new User{Name = "Too_long"}.SetUp().IsOKToAccept())))

        .And(Narrative.Exec("email is validate", 
                 () => Assert.IsFalse(new User { Email = "bad_email" }.SetUp().IsOKToAccept())))

        .And(Narrative.Exec("password is correct length and matches confirm", 
                 () => Assert.IsFalse(new User { Password = "one_version", Confirm = "differnt"}.SetUp().IsOKToAccept())))

        .And(Narrative.Exec("address is valid", 
                  () => Assert.IsFalse(new User { Address = new Address{Street = null}}.SetUp().IsOKToAccept())))
        
        .Then(Narrative.Text("I can log now login"))
        .And(Narrative.Text("I am sent a confirmation email"));

    story.Assert();
}

Working with BDD around domain models and validations makes sense to me. I think it is a good way to report validations back to the client and the team. There is also a clear delineation in the structure between the “valid” model (the “Given” section) and the “invalid” edge cases (the “When” section). In this case, it also exposes that the model so far does nothing, because there are no executable tests in the “Then” section.
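If the login behaviour did exist, those two Text placeholders could become executable steps as well. This is only a sketch: CanLogin() and ConfirmationEmailSent() are hypothetical members, not part of the sample code.

        .Then(Narrative.Exec("I can now login",
                 () => Assert.IsTrue(new User().SetUp().CanLogin())))   // hypothetical method

        .And(Narrative.Exec("I am sent a confirmation email",
                 () => Assert.IsTrue(new User().SetUp().ConfirmationEmailSent())));   // hypothetical method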

Some quick conclusions

  1. The syntactic sugar of object initializers in C# 3.0 (shipped with .NET 3.5) avoids the need for the “with” methods found in the Test Data Builder pattern
  2. Extension methods may better replace the Build method in the builder (see the sketch after this list)
  3. Object mothers are still helpful to retain separation of concerns (eg data from construction)
  4. Go BDD ;-)
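To sketch the second conclusion (hypothetical, and not in the sample download): Build() could itself be an extension on User, reusing the reflection helper from strategy three (assuming it is made accessible), so that a partially constructed user closes the chain without a builder class at all:

public static class UserBuilderExtensions
{
    // Build() as an extension: merge the test's fields into a valid default
    public static User Build(this User user)
    {
        User fake = TestUser.Valid();
        SyncPropertiesWithDefaults(user, fake);  // the reflection helper from strategy three
        return fake;
    }
}

// usage: var fake = new User { Name = "John" }.Build();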

Well that’s about it for now. Here’s a download of the full code sample in VS2008.

Postscript: About Naming Conventions

How do I go about naming my object mother? I have lots of options: Test, Fake, ObjectMother, OM, Dummy. Of course, if I were a purist it would be ObjectMother, but that doesn’t sit well with me when explaining it to others. Although ObjectMother makes the pattern explicit, I find Fake most useful (eg FakeUser): it rolls off the tongue and is self-evident enough to outsiders as a strategy. Test (eg TestUser), on the other hand, is generic enough that I have to remind myself of its purpose, and I find I always need to do quick translations in the tests themselves; with UserTest and TestUser, I have to read them to check which one I am working with. For these samples, I have used TestUser. If you look at my production code you will find FakeUser. I hope that doesn’t prove problematic in the long run, as it isn’t really a fake.