
SharePoint & TDD: Getting started advice

July 1st, 2011

I have had a couple of people asking lately about getting started with TDD on SharePoint. They've asked how to move forward with unit and integration testing and stability. No one wants to go down the mocking route (TypeMock, Pex and Moles), and quite rightly. So here's my road map:

The foundations: Hello World scripted and deployable without the GUI

  1. Get a Hello World SharePoint “app” – something that is packageable and deployable as a WSP
  2. Restructure the folders of the code away from the Microsoft project structure so that the root folder has src/, tools/, lib/ and scripts/ folders. All source and tests are in src/ folder. This lays the foundation for a layered code base. The layout looks like this sample application
  3. Make the compilation, packaging, installation (and configuration) all scripted. Learn to use psake for your build scripts and powershell more generally (particularly against the SharePoint 2010 API). The goal here is that devs can build and deploy through the command line; as such, so too can the build server. I have a suggestion here that still stands, but I need to blog on improvements. Most notably, don't split out tasks but rather keep them in the same default.ps1 (because -docs works best). Rather than getting reuse at the task level, do it as functions (or cmdlets). Also, I am now moving away from the mix with msbuild that I blogged here and am moving it all into powershell. There is no real advantage other than fewer files and a reduced mix of techniques (and lib inclusions).
  4. Create a build server and link this build and deployment to it. I have been using TFS and TeamCity. I recommend TeamCity, but TFS will suffice. If you haven't created Build Definitions in TFS Workflow, allow days-to-weeks to learn it. In the end, but only in the end, it is simple. Be careful with TFS: the paradigm there is that the build server does tasks that devs don't. It looks like a nice approach; I don't recommend it, and there is nothing here by design that makes it inevitable. In TFS, you are going to need to build two build definitions: SharePointBuild.xaml and SharePointDeploy.xaml. The build is a compile, package and test. The deploy simply deploys to an environment – Dev, Test, Pre-prod and Prod. The challenge here is to work out a method for deploying into environments. In the end, I wrote a simple self-hosted windows workflow (xamlx) that did the deploying. Again, I haven't had time to blog the sample. Alternatively, you can use psexec. The key is that for a SharePoint deployment you must be running on the local box, and most configurations have a specific service account for permissions. So I run a service for deployment that runs under that service account.

Now that you can reliably and repeatably test and deploy, you are ready to write code!

Walking Skeleton

Next is to start writing code based on a layered strategy. What we have found is that we need to do two important things: (1) always keep our tests running on the build server and (2) attend to keeping the tests running quickly. This is difficult in SharePoint because a lot of code relates to integration and system tests (as defined by the test automation pyramid). We find that integration tests that require setup/teardown of a site/features get brittle and slow very quickly. In this case, reduce setup and teardown in the system tests. However, I have also had a case where an integration test showed that a redesigned object (that facaded SharePoint) would give better testability for little extra work.

  1. Create 6 more projects based on a DDD structure (Domain, Infrastructure, Application, Tests.Unit, Tests.Integration & Tests.System). Also rename your SharePoint project to UI-[Your App]; this avoids naming conflicts on a SharePoint installation. We want to create a ports-and-adapters application around SharePoint. For example, we can wrap property bags with the repository pattern. This means that we create domain models (in Domain), return them with repositories (in Infrastructure) and can test them with integration tests.
  2. System tests: I have used StoryQ with the team to write tests because it allows for a setup/teardown and then multiple test scenarios. I could use SpecFlow or nBehave just as easily.
  3. Integration tests: these are written in classical TDD style.
  4. Unit tests: these are also written in classical TDD/BDD style.
  5. Javascript tests: we write all javascript code using a jQuery plugin style (aka Object Literal) – in this case, we use JSSpec (but I would now use Jasmine) – we put all tests in Tests.Unit but the actual javascript is still in the UI-SharePoint project. You will need two sorts of tests: Examples for exploratory testing and Specs for the Jasmine specs. I haven't blogged about this and need to, but it is based on my work on writing jQuery plugins with tests.
  6. Deployment tests: these are tests that run once the application is deployed. You can go to an ATOM feed which returns the results of a series of tests that run against the current system. For example, we have a standard set which tells us the binary versions and which migrations (see below) have been applied. Others check whether a certain wsp has been deployed, that different endpoints are listening, etc. I haven't blogged this code and mean to – it has been great for testers to see if the current system is running as expected. We also get the build server to pass/fail a build based on these results; a hedged sketch of such a check follows this list.
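As an illustration of that build-server check, here is a minimal sketch – the feed URL and the entry format are hypothetical (ours differ), but the shape is the same:

[Test]
public void DeploymentTestsFeedReportsNoFailures()
{
    // hypothetical feed exposed by the deployed solution
    var request = System.Net.WebRequest.Create("http://mysites/_layouts/DeploymentTests/results.atom");
    using (var response = request.GetResponse())
    using (var reader = new System.IO.StreamReader(response.GetResponseStream()))
    {
        var feed = reader.ReadToEnd();
        Assert.IsFalse(feed.Contains("<result>fail</result>"), "a deployment test failed");
    }
}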

We don’t use Pex and Moles. We use exploratory testings to ensure that something actually works on the page

Other bits you’ll need to sort out

  • Migrations: if you have manual configurations for each environment then you'll want to script/automate this. Otherwise, you aren't going to have one-click deployments. Furthermore, you'll need to assume that each environment is in a different state/version. We use migratordotnet with a SharePoint adapter that I wrote – it is here for SharePoint 2010 – there is also a powershell runner in the source to adapt – you'll need to download the source and compile. Migrations as an approach works extremely well for feature activation and publishing; a hedged sketch of a migration follows this list.
  • Application Configuration: we use domain models for configuration and then instantiate via an infrastructure factory – certain configs require SharePoint knowledge
  • Logging: you'll need to sort out a Service Locator here, because in tests you'll swap the logger out for a Console.Logger
  • WebParts: can’t be in a strongly typed binary (we found we needed another project!)
  • Extension Methods to Wrap SharePoint API: we also found that we wrapped a lot of SharePoint material with extension methods
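To give a flavour of what a migration looks like, here is a hedged sketch: the [Migration] attribute and the Up/Down pair are standard migratordotnet, while the feature-activation helper is an assumption standing in for whatever your SharePoint adapter exposes. The feature GUID is the publishing web feature that appears later in this series.

using System;
using Migrator.Framework;

[Migration(20110601120000)]
public class ActivatePublishingWebFeature : Migration
{
    private static readonly Guid PublishingWebFeature = new Guid("94C94CA6-B32F-4da9-A9E3-1F3D343D7ECB");

    public override void Up()
    {
        SharePointAdapter.ActivateFeature("http://mysites/", PublishingWebFeature); // assumed helper
    }

    public override void Down()
    {
        SharePointAdapter.DeactivateFeature("http://mysites/", PublishingWebFeature); // assumed helper
    }
}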

Other advice: stay simple

For SharePoint developers not used to object-oriented programming, I would stay simple. In this case, I wouldn't create code with abstractions that allowed you to unit test like this. I found in the end that the complexity that came with that testability outweighed the simplicity and maintainability.

Microsoft itself has recommended the Repository Pattern to facade the SharePoint API (sorry, I can't for the life of me find the link). This has been effective – so effective that we have found we can facade most SharePoint calls in one of two ways: a repository that returns/works with a domain concept, or a Configurator (which has the single public method Process()). Any more than that and it was really working against the grain. All cool, very possible, but not very desirable for a team which rotates people.
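To make the two shapes concrete, here they are as minimal interfaces – a sketch only; apart from Process(), the member names are illustrative rather than from our code base:

public interface IConfigurator
{
    void Process();  // the single public method
}

public interface IRepository<T>
{
    T Find();        // returns a domain concept, hiding the SharePoint API
    T Save(T item);
}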

Repository Pattern and SharePoint to facade PropertyBag

May 23rd, 2011

Introduction

Microsoft Patterns and Practices recommend facading SharePoint with the repository pattern. If you are an object-oriented programmer, that request is straightforward. If you're not, then it isn't. There are few examples of this practice: most code samples in SharePoint work directly with the API, and SharePoint is scattered throughout the entire code base. If you haven't read much about this, there is a good section in Freeman and Pryce (Growing Object-Oriented Software, Guided by Tests) about this approach – they relate it back to Cockburn's ports and adapters and Evans' anti-corruption layer. I personally think about it as an anti-corruption layer.

In this post, I will give two examples of how we avoid SharePoint having too much reach into the code base when using Properties. If we were not to use this solution, the code would be very EASY. Whenever we wanted a value we would use this code snippet: SPFarm.Local.Properties[key].ToString() (with some Security.RunWithElevatedPrivileges). Using this approach, at best we are likely to see the key as a global constant in some register of keys.

This type of code does not fit the Freeman and Pryce mantra of preferring to write maintainable code over code that is easy to write. Maintainable code has separation of concerns, abstractions and encapsulation – this is also testable code. So in this example, what you'll see is a lot more code, but what you'll also hopefully appreciate is that we are teasing out domain concepts where SharePoint happens to be only the technical implementation.

So, the quick problem domain. We have two simple concepts: a site location and an environment. We have decided that our solution requires both of these pieces of information to be stored in SharePoint. In this case, we have further decided (rightly or wrongly – possibly wrongly) that we are going to let a little bit of SharePoint leak in: both a site location and an environment are really property bag values – we make this decision because the current developers think it is easier in the long run. So, we decided against the EASY option.

Easy Option

Create a register:

public class EnvironmentKeys {
  public const string SiteLocationKey = "SiteUrl";
  public const string EnvironmentKey = "Environment";
}

Access it anytime, either to get:

  var siteUrl = SPFarm.Local.Properties[EnvironmentKeys.SiteLocationKey];

Or update:

  SPFarm.Local.Properties[EnvironmentKeys.SiteLocationKey] = "http://newlocation/";
  SPFarm.Local.Update();  // don't worry about privileges as yet

Maintainable option

We are going to create two domain concepts, SiteLocation and Environment, both of which are PropertyBagItems fronted by a PropertyBagItemRepository that allows us to Find and Save them. Note: we've decided to be a little technology-bound here because we are using the notion of a property bag when we could just front each domain concept with a repository. We can always refactor later – the other agenda here is getting SharePoint devs exposure to writing code using generics.

Here are our domain concepts.

Let’s start with our property bag item contract:

public abstract class PropertyBagItem
{
  public abstract string Key { get; }
  public abstract string Value { get; set; }
}

It has two obvious parts: key and value. Most important here is that we don’t orphan the key from the domain concept. This allows us to avoid the problem of a global register of keys.

And let’s have a new SiteLocation class.

public class SiteLocation : PropertyBagItem
{
  public override string Key { get { return "SiteLocationKey"; } }
  public override string Value { get; set; }

  public SiteLocation() { } // for the new() constraint on Find<T>
  public SiteLocation(string url) { Value = url; }
}

Now, let’s write a test for finding and saving a SiteLocation. This is a pretty ugly test because it requires one being set up. Let’s live with it for this sample.

[TestFixture]

public class PropertyBagItemRepositoryTest
{
  private PropertyBagItemRepository _repos;

  [SetUp]
  public void Setup()
  {
    _repos = new PropertyBagItemRepository();
    _repos.Save(new SiteLocation("http://mysites-test/"));
  }

  [Test]
  public void CanFind()
  {
    Assert.That(_repos.Find<SiteLocation>().Value, Is.EqualTo("http://mysites-test/"));
  }

} 

Now, we’ll look at a possible implementation:

public class PropertyBagItemRepository
{
  private readonly Logger _logger = Logger.Get();

  public T Find<T>() where T : PropertyBagItem, new()
  {
    var property = new T();
    _logger.TraceToDeveloper("PropertyBagItemRepository: Finding key: {0}", property.Key);
    return Security.RunWithElevatedPrivileges(() =>
        {
          if (SPFarm.Local.Properties.ContainsKey(property.Key))
          {
            property.Value = SPFarm.Local.Properties[property.Key].ToString();
            _logger.TraceToDeveloper("PropertyBagItemRepository: Found key with property {0}", property.Value);
          }
          else
          {
            _logger.TraceToDeveloper("PropertyBagItemRepository: Unable to find key: {0}", property.Key);
          }
          return property;
        });
  }
}
}

That should work, and we could then add more tests and an implementation for the Save, which might look like the following – I prefer chaining, so I return T:

public T Save<T>(T property) where T : PropertyBagItem
{
  _logger.TraceToDeveloper("PropertyBagItemRepository: Save key: {0}", property.Key);
  Security.RunWithElevatedPrivileges(() =>
  {
    SPFarm.Local.Properties[property.Key] = property.Value;
    SPFarm.Local.Update();
  });
  return property;
}
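A quick note on Security.RunWithElevatedPrivileges: it is one of our wrappers, not the raw API. SPSecurity.RunWithElevatedPrivileges takes a void delegate, so a small wrapper is needed to return a value out of the elevated block. A minimal sketch, assuming nothing more than the calls above:

public static class Security
{
  public static T RunWithElevatedPrivileges<T>(Func<T> func)
  {
    var result = default(T);
    SPSecurity.RunWithElevatedPrivileges(() => { result = func(); });
    return result;
  }

  public static void RunWithElevatedPrivileges(Action action)
  {
    SPSecurity.RunWithElevatedPrivileges(() => action());
  }
}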

Finally, let’s look at our next domain concept the environment. In this case, we want to enumerate all environments. So, we’ll write our integration test (yes, we should have a unit test for this domain concept first):

[Test]
public void CanFindEnvironment()
{
  Assert.That(new PropertyBagItemRepository().Find<Environment>().Code, Is.EqualTo(Environment.EnvironmentCode.DEVINT));
}

And now we can see that the implementation is a little more complex than the SiteLocation, but that we can encapsulate the details well enough – actually, there is some dodgy code, but the point is to illustrate that we need to keep environment logic, parsing and checking all together:

public class Environment : PropertyBagItem
{
  public enum EnvironmentCode { PROD, PREPROD, TEST, DEV }

  public override string Key { get { return "EnvironmentKey"; } }

  public EnvironmentCode Code { get; set; }

  public override string Value
  {
    get { return Enum.GetName(typeof(EnvironmentCode), Code); }
    set { Code = Parse(value); }
  }

  public Environment(EnvironmentCode code)
  {
    Code = code;
  }

  public Environment(string code)
  {
    Code = Parse(code);
  }

  public Environment() : this(EnvironmentCode.DEV) // new() constraint on Find<T> requires a parameterless constructor
  {
  }

  public static EnvironmentCode Parse(string property)
  {
    try
    {
      return (EnvironmentCode)Enum.Parse(typeof(EnvironmentCode), property, true);
    }
    catch (Exception)
    {
      return EnvironmentCode.DEV;
    }
  }
}

It wasn’t that much work really was it?


SharePoint TDD Series: Maintainability over Ease

December 16th, 2010

This series is part of my wider initiative around the Test Automation Pyramid. Previously I have written about Asp.Net MVC. This series will outline a layered code and test strategy in SharePoint.

SharePoint is a large and powerful system. It can cause problems in the enterprise environment, incurring delays, cost and general frustration. Below is an overview of the main areas of innovation made in the source code to mitigate these problems. These problems arise because SharePoint is fundamentally designed to be an "easy" system to code: easy in the sense that it can be configured by a general pool of developers and non-developers alike. Such a design, however, does not necessarily make the system maintainable. Extension, testability and stability may all suffer. In enterprise environments these last qualities are equally if not more important to the long-term value of software.

This series of posts outlines both the code used and the reasons behind its usage. As such it is a work in progress that will need to be referred to and updated as the code base itself changes.

Deployment Changes

Layered Code with testing

Testing on Event Receiver via declarative attributes

Testing Delegate controls which deploy jQuery

  • Part 5 – Client side strategies for javascript
  • Part 6 – Unit testing the jQuery client-side code without deploying to SharePoint
  • Part 2 – Unit testing the delegate control that houses the jQuery
  • Part 4 – Exploratory testing without automation is probably good enough

Cross-cutting concerns abstractions

Test Strategy in SharePoint: Part 4 – Event Receiver as layered Feature

December 12th, 2010

In Test Strategy in SharePoint: Part 3 – Event Receiver as procedural, untestable feature we arrived at some code that we believe to be nicer – layered and more testable. This entry will look at the tests and the code that make that design come alive.

The starting design may get tweaked a little as we go. In practice, we started with this design in mind (i.e. use attributes to declaratively decide what to provision) but refactored the original code test-first, trying to work out what was unit- vs integration-testable and what code belonged in the domain, infrastructure and UI layers (based on good layering to aid testability). We won't take you through that process, but it only took a few hours to knock out the first attempt.

Here we’ll take you through the creation of one type of the page the NewPage. We have put the others there to show that we made the design because we are going to require many types of pages and we are hoping that the benefit of an attribute and its declarative style will payoff against its cost. We are looking for accessibility and maintainability as we bring on new developers – or come back to it ourselves in a couple of weeks!

using System.Runtime.InteropServices;
using YourCompany.SharePoint.Domain.Model.Provisioning;
using YourCompany.SharePoint.Infrastructure;
using YourCompany.SharePoint.Infrastructure.Configuration;
using Microsoft.Office.Server.UserProfiles;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Publishing;

namespace YourCompany.SharePoint.MySites.Features.WebFeatureProvisioner
{
    [Guid("cb9c03cd-6349-4a1c-8872-1b5032932a04")]
    public class SiteFeatureEventReceiver : SPFeatureReceiver
    {
        [NewPage("Home.aspx", "Home Page", "HomePage.aspx")]
        [ActivateFeature(PackageCfg.PublishingWebFeature)]
        [RemovePage("Pages/default.aspx")]
        [MasterPage("CustomV4.master", MasterPage.MasterPageType.User)]
        [MasterPage("CustomMySite.master", MasterPage.MasterPageType.Host)]
        public override void FeatureActivated(SPFeatureReceiverProperties properties)
        {
            PersonalSiteProvisioner
              .Create(properties.Feature.Parent as SPWeb)
              .Process();
        }
    }
} 

Overview of strategy

I want to drive as much testing as possible back into unit tests; the rest can go to integration testing. However, there is another issue here. As a team member, I actually want to have a record of the operational part of the tests, because this is about installation/deployment at the stage of provisioning/activation. So we'll need to write a BDD-style acceptance test to tease out the feature activation process too. Thus:

  • the acceptance test will have the ability to actually activate a feature
  • the unit tests should specify the specifics of this activation – these are the mechanism we will use to drive out code abstractions
  • the integration tests will be any tests proving specific API calls

System Acceptance test

Before we try and drive out a design, let's understand what needs to be done. To write this acceptance test requires a good knowledge of SharePoint provisioning – so these are technically-focussed acceptance tests rather than business ones.

We will write a system test in Acceptance\Provisioning\ (we will implement this in StoryQ later on):

Story is Solution Deployment

In order to create new pages for user
As a user
I want a 'MySites' available

With scenario have a new feature
  Given I have a new wsp package mysites.wsp
  When site is deployed
    And I am on site http://mysites/personal
  Then Publishing Site Feature is site activated

Design

We now start to drive out some code concepts from tests. I think that our TODO list is something like this:

TODO

  • have an attribute
  • be able to read the attributes from a class
  • return a publisher for a page rather than a page
  • create a service for processing a publisher
  • actually publish pages in the context of SharePoint (integration)
  • check that it all works in SharePoint when deployed (system)
  • add new types of publishers into the factory class

Unit test: have an attribute

Because this is the first test, I will add in the namespace to show that we are driving out code from the domain and in the unit test project. This code creates an attribute that we use to represent a new page. It is straightforward: we want to be able to simply declare the name, title and layout of each page we want provisioned.

namespace Test.Unit.Provisioning
{
    [TestFixture]
    public class SiteProvisioningAttributesTests
    {
        [Test]
        public void ProvisioningHasHomePage()
        {
            Assert.IsTrue(typeof(TestClass).GetMethod("OneNewPage").GetCustomAttributes(typeof(NewPageAttribute), false).Count() == 1);
        }

        [Test]
        public void CanReturnPageValues()
        {
            var page = ((IProvisioningAttribute)typeof(TestClass).GetMethod("OneNewPage").GetCustomAttributes(typeof(NewPageAttribute), false).First()).Page;
            Assert.AreEqual("Home.aspx", page.Name);
        }

        public class TestClass
        {
            [NewPage("Home.aspx", "Home Page", "HomePage.aspx")]
            public void OneNewPage()
            {

            }
        }
    }
} 

Now the skeleton code for the attribute – it will need to do provisioning later on but let’s leave that for now. At this stage, we are just going to make the attribute be able to return the value object of a page:

using System;
using YourCompany.SharePoint.Domain.Model;
using YourCompany.SharePoint.Domain.Model.Provisioning;
using YourCompany.SharePoint.Domain.Services.Provisioning;

namespace YourCompany.SharePoint.Infrastructure.Provisioning
{
    [Serializable, AttributeUsage(AttributeTargets.Method, Inherited = false, AllowMultiple = true)]
    public class NewPageAttribute : Attribute, IProvisioningAttribute
    {
        public Page Page { get; private set; }
        public NewPageAttribute(string name, string title, string pageLayout)
        {
            Page = new Page(name, title, pageLayout);
        }
    }
} 
public struct Page
{
    public string Name { get; private set; }
    public string Title { get; private set; }
    public string PageLayout { get; private set; }
    
    public Page(string name, string title, string pageLayout)
        : this()
    {
        Name = name;
        Title = title;
        PageLayout = pageLayout;
    }
}
public interface IProvisioningAttribute
{
    Page Page { get; }
}

Unit test: be able to read the attributes from a class

TODO

  • have an attribute
  • be able to read the attributes from a class
  • return a publisher for a page rather than a page
  • create a service for processing a publisher
  • actually publish pages in the context of SharePoint (integration)
  • check that it all works in SharePoint when deployed (system)
  • add new types of publishers into the factory class

Now that we have an attribute that can return the values for a page, we need to create a parser across the class that can return the attributes.


[TestFixture]
public class SiteProvisioningAttributesTests
{
    [Test]
    public void NewProvisioningCountIsOneForNewPageMethod()
    {
        var pages = AttributeParser.Parse<NewPageAttribute>(typeof(TestClass), "OneNewPage");
        Assert.IsTrue(pages.Count() == 1);
    }

    [Test]
    public void NewProvisioningCountIsTwoForNewPageMethod()
    {
        var pages = AttributeParser.Parse<NewPageAttribute>(typeof(TestClass), "TwoNewPages");
        Assert.IsTrue(pages.Count() == 2);
    }

    public class TestClass
    {
        [NewPage("Home.aspx", "Home Page", "HomePage.aspx")]
        public void OneNewPage()
        {
        }

        [NewPage("Home.aspx", "Home Page", "HomePage.aspx")]
        [NewPage("Home2.aspx", "Home2 Page", "HomePage2.aspx")]
        public void TwoNewPages()
        {
        }
    }
}

With code something like this, we are going to get all the new pages.

public class AttributeParser
{
    public static IEnumerable<Page> Parse<T>(Type type, string method)
        where T : IProvisioningAttribute
    {
        return type.GetMethod(method).GetCustomAttributes(typeof(T), false)
            .Select(attribute => ((T)attribute).Page);
    }
} 

Now that we return a page by iterating over the method's attributes, I can see that we don't want a page per se but rather a specific type of publisher.

Unit tests: return a publisher for a page rather than a page

TODO

  • have an attribute
  • be able to read the attributes from a class
  • return a publisher for a page rather than a page
  • create a service for processing a publisher
  • actually publish pages in the context of SharePoint (integration)
  • check that it all works in SharePoint when deployed (system)
  • add new types of publishers into the factory class

Returning a publisher is going to be a refactor, as we are adding a new concept through the attribute. Let's rewrite the test so that our parser returns an explicit IPagePublisher rather than an implicit Page.

[TestFixture]
public class SiteProvisioningAttributesTests
{
    [Test]
    public void NewProvisioningCountIsOneForNewPageMethod()
    {
        var pages = AttributeParser.Parse<IPagePublisher, NewPageAttribute>(typeof(TestClass), "OneNewPage");
        Assert.IsTrue(pages.Count() == 1);
    }

    public class TestClass
    {
        [NewPage("Home.aspx", "Home Page", "HomePage.aspx")]
        public void OneNewPage()
        {
        }
    }
}

As we unpick this small change, we get the introduction of a new interface IPagePublisher. In practice, I add an empty interface but here I want to show that we introduce the concept of publishing without actual implementations – this is the benefit that we are looking for. In summary, I give the activator a “new page” with details and it provides back to me a publisher that knows how to deal with this information. That sounds fair to me.

public interface IPagePublisher
{
    void Add();
    void Delete();
    void CheckIn();
    void CheckOut();
    void Publish();
    bool IsPublished();
    bool IsProvisioned();
} 

So now our AttributeParser becomes:

public class AttributeParser
{
    public static IEnumerable<T> Parse<T, T1>(Type type, string method)
        where T : IPagePublisher
        where T1 : IProvisioningAttribute
    {
        return type.GetMethod(method).GetCustomAttributes(typeof(T1), false)
            .Select(attribute => ((T1)attribute).Publisher(((T1)attribute).Page))
            .Cast<T>();
    }
} 

This requires a change to our interface. The change may need a little explanation for some. Why the Func? Basically, we get a property that accepts parameters and we can swap out its implementation as needed (for testing).

public interface IProvisioningAttribute
{
    Func<Page, IPagePublisher> Publisher { get; }
    Page Page { get; }
}

So the concrete implementation now becomes:

[Serializable, AttributeUsage(AttributeTargets.Method, Inherited = false, AllowMultiple = true)]
public class NewPageAttribute : Attribute, IProvisioningAttribute
{
    public Func<Page, IPagePublisher> Publisher
    {
        get { return page => new PagePublisher(page); }
    }

    public Page Page { get; private set; }

    public NewPageAttribute(string name, string title, string pageLayout)
    {
        Page = new Page(name, title, pageLayout);
    }
}

Wow, all we did was add IPagePublisher to AttributeParser.Parse<IPagePublisher, NewPageAttribute>(typeof(TestClass), "OneNewPage"); and we got all that code!

Unit tests: create a service for processing a publisher

TODO

  • have an attribute
  • be able to read the attributes from a class
  • return a publisher for a page rather than a page
  • create a service for processing a publisher
  • actually publish pages in the context of SharePoint (integration)
  • check that it all works in SharePoint when deployed (system)
  • add new types of publishers into the factory class

So far, we are able to make a method with attributes for each page. When we parse these attributes, we get back the publishers with the page. Now we need to be able to process a publisher. We are going to move into mocking out a publisher to check that the publisher is called in the service. This service is a provisioner and it will be provisioning the personal site. So hopefully calling it the PersonalSiteProvisioner makes sense.

Let’s look at the code below. We are creating a new personal site provisioner and inside this we want to ensure that the page publisher actually adds a page. Note: we are deferring how the page is actually added. We just want to know that we are calling add. (Note: we using Moq as the isolation framework.)

using Moq;

[TestFixture]
public class SiteProvisioningTest
{
    [Test]
    public void CanAddPage()
    {
        var page = new Mock<IPagePublisher>();
        var provisioner = new PersonalSiteProvisioner(page.Object);

        provisioner.Process();

        page.Verify(x => x.Add());
    }
 }

So here’s the code to satisfy the test:

public interface IProvisioner
{
    void Process();
}

With the implementation:

namespace YourCompany.SharePoint.Domain.Services.Provisioning
{
    public class PersonalSiteProvisioner : IProvisioner
    {
        public List<IPagePublisher> PagePublishers { get; private set; }

        public PersonalSiteProvisioner(List<IPagePublisher> publishers)
        {
            PagePublishers = publishers;
        }

        public PersonalSiteProvisioner(IPagePublisher publisher)
            : this(new List<IPagePublisher> { publisher })
        {
        }

        public void Process()
        {
            PagePublishers.TryForEach(x => x.Add());
        }
    }
}

Right, that all looks good. We can Process some publishers in our provisioner. Here's some of the beauty (IMHO) that we can add to these tests without going near an integration point. Let's add a few more tests: adding multiple pages, adding and deleting, ensuring that there is error handling, trying different combinations of adding and deleting, and finally that if one add errors we can still process the others in the list. Take a look at the tests (p.s. the implementation in the end was easy once we had tests, but it took an hour or so).

Unit tests: really writing the provisioner across most cases

TODO

  • have an attribute
  • be able to read the attributes from a class
  • return a publisher for a page rather than a page
  • create a service for processing a publisher
  • actually publish pages in the context of SharePoint (integration)
  • check that it all works in SharePoint when deployed (system)
  • add new types of publishers into the factory class

[TestFixture]
public class SiteProvisioningTest
{

    [Test]
    public void CanAddMultiplePages()
    {
        var page = new Mock<IPagePublisher>();
        var provisioner = new PersonalSiteProvisioner(new List<IPagePublisher> { page.Object, page.Object });

        provisioner.Process();

        page.Verify(x => x.Add(), Times.Exactly(2));
    }

    [Test]
    public void CanRemovePage()
    {
        var page = new Mock<IPagePublisher>();
        page.Setup(x => x.IsPublished()).Returns(true);

        var provisioner = new PersonalSiteProvisioner(page.Object);

        provisioner.Process();

        page.Verify(x => x.Delete());
    }

    [Test]
    public void RemovingPageThatDoesntExistDegradesNicely()
    {
        var page = new Mock<IPagePublisher>();
        page.Setup(x => x.IsPublished()).Returns(true);
        page.Setup(x => x.Delete()).Throws(new Exception());

        var provisioner = new PersonalSiteProvisioner(page.Object);

        provisioner.Process();

        page.Verify(x => x.Delete());
    }

    [Test]
    [Sequential]
    public void SecondItemIsProcessedWhenFirstItemThrowsException(
        [Values(false, true)] bool delete,
        [Values(true, false)] bool add)
    {
        var page = new Mock<IPagePublisher>();
        page.Setup(x => x.IsPublished()).ReturnsInOrder(
            () => delete,
            () => add);
        page.Setup(x => x.Delete()).Callback(() => { throw new Exception(); });

        var provisioner = new PersonalSiteProvisioner(new List<IPagePublisher> { page.Object, page.Object, page.Object });
        provisioner.Process();

        page.Verify(x => x.Add(), Times.Once());
        page.Verify(x => x.Delete(), Times.Once());
    }

    [Test]
    public void AlreadyInstalledPagesWontReinstallCatchesException()
    {
        var page = new Mock<IPagePublisher>();
        page.Setup(x => x.Add()).Throws(new Exception());

        var provisioner = new PersonalSiteProvisioner(page.Object);
        provisioner.Process();

        page.Verify(x => x.Add(), Times.Once());
    }
}
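One note on the tests above: ReturnsInOrder is not part of Moq out of the box – it is a small community extension method. A sketch of one possible implementation against Moq's ISetup interface:

using System;
using System.Collections.Generic;
using Moq.Language.Flow;

public static class MoqExtensions
{
    // return the result of the next function on each successive call
    public static void ReturnsInOrder<TMock, TResult>(this ISetup<TMock, TResult> setup,
        params Func<TResult>[] valueFunctions) where TMock : class
    {
        var queue = new Queue<Func<TResult>>(valueFunctions);
        setup.Returns(() => queue.Dequeue()());
    }
}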

And the implementation is below. Just a couple of notes: this class should have logging in it, and it should also be interaction tested (this is the key point in the system that we need in the logs); there is also a helper wrapper TryForEach, which is a simple wrapper around a LINQ-style ForEach that null checks (a sketch follows the code below). Believe it or not, the tests above actually drove out a lot of errors even in this small piece of code, because it had to deal with list processing. We now don't have to deal with these issues at integration (and particularly in production).

namespace YourCompany.SharePoint.Domain.Services.Provisioning
{
    public class PersonalSiteProvisioner : IProvisioner
    {
        public List<IPagePublisher> PagePublishers { get; private set; }

        public PersonalSiteProvisioner(List<IPagePublisher> publishers)
        {
            PagePublishers = publishers;
        }
        public PersonalSiteProvisioner(IPagePublisher publisher) 
          : this(new List<IPagePublisher>{publisher})
        {
        }

        public void Process()
        {
            PagePublishers.TryForEach(x =>
            {
                if (x.IsPublished()) {
                    x.Delete();
                } else {
                    x.Add();
                }
            });
        }
    }
} 
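The TryForEach helper itself isn't shown in this post, so here is a minimal sketch. The null check is as described above; the per-item exception handling is an assumption inferred from the tests (a throwing Delete() must not stop the remaining publishers):

using System;
using System.Collections.Generic;

public static class EnumerableExtensions
{
    public static void TryForEach<T>(this IEnumerable<T> items, Action<T> action)
    {
        if (items == null) return; // degrade nicely rather than throw

        foreach (var item in items)
        {
            try
            {
                action(item);
            }
            catch (Exception)
            {
                // swallow so the remaining items are still processed
                // (production code would log here)
            }
        }
    }
}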

Now we are ready to do something with SharePoint.

Integration tests: publish pages in the context of SharePoint

TODO

  • have an attribute
  • be able to read the attributes from a class
  • return a publisher for a page rather than a page
  • create a service for processing a publisher
  • actually publish pages in the context of SharePoint (integration)
  • check that it all works in SharePoint when deployed (system)
  • add new types of publishers into the factory class

Now we have to deal with SharePoint. This means that we are going to write an integration test. In terms of the classes, we are now ready to implement the PagePublisher. Let's write a test. The basic design is that we will hand in the publishing web context (and page) and then add the page. (Note: we could have handed in only the web context in the constructor and then the page dependency in the method – this looks better in hindsight.) The code then asserts that the item exists. Now, those of you familiar with the SharePoint API know that neither the Site.OpenPublishingWeb nor SiteExpectations.ExpectListItem calls exist. These are wrappers to make the code more readable. We'll include those below.

What you should be seeing is that there are no references to SharePoint in the Domain code – the flip side being that the Infrastructure code is the place where these live.

using YourCompany.SharePoint.Domain.Model.Provisioning;
using YourCompany.SharePoint.Infrastructure;
using Microsoft.SharePoint.Publishing;
using NUnit.Framework;
using Test.Unit;

namespace Test.Integration.Provisioning
{
    [TestFixture]
    public class AddPagePublisherTest
    {
        [Test]
        public void CanAddPage()
        {
            var page = new Page("Home.aspx", "Home Page", "HomePage.aspx");   
            Site.OpenWeb("http://mysites/", x => new PagePublisher(x.Web, page).Add());
            SiteExpectations.ExpectListItem("http://mysites/", x => x.Name == "Home.aspx");
        }
    }
}

Additional wrapper code – one in Infrastructure and the other in the Test.Integration project – these both hide the repetitive complexity of SharePoint.

using System;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Publishing;

namespace YourCompany.SharePoint.Infrastructure
{
    public static class Site
    {
        public static void OpenWeb(string url, Action<SPWeb> action)
        {
            using (var site = new SPSite(url))
            using (var web = site.OpenWeb())
            {
                action(web);
            }
        }
        public static void OpenPublishingWeb(string url, Action<PublishingWeb> action)
        {
            using (var site = new SPSite(url))
            using (var web = site.OpenWeb())
            {
                action(PublishingWeb.GetPublishingWeb(web));
            }
        }
    }
}

And the test assertion:

using System;
using System.Linq;
using Microsoft.SharePoint;
using NUnit.Framework;
using YourCompany.SharePoint.Infrastructure;

namespace Test.Integration
{
   public class SiteExpectations
   {
       public static void ExpectListItem(string url, Func<SPListItem, bool> action)
       {
           Site.OpenPublishingWeb(url, x => Assert.IsTrue(x.PagesList.Items.Cast<SPListItem>().Where(action).Any()));
       }
   }
}

So now let’s look at (some of) the code to implement Add. Sorry, it’s a bit long but should be easy to read. Our page publisher code doesn’t look a lot different from the original code we found in the blog Customising SharePoint 2010 MySites with Publishing Sites in in the SetupMySite code. It is perhaps a little clearer because there are a few extra private methods that help describe the process that SharePoint requires when creating a new page. But the other code would have got there eventually.

The key difference is how we get there. The correctness of this code was confirmed by the integration test. In fact, we had to run this code tens (or perhaps a hundred) times to iron out the idiosyncrasies of the SharePoint API – particularly the case around DoUnsafeUpdates. Yet we didn't have to deploy the entire solution each time, and at no point did we have to debug by attaching to a process. There were some times we were at a loss and did resort to the debugger, but we were able to get debug context in the test runner. All of this has led to increases in speed and stability.

There’s a final win in design. This code has one responsibility: to add a page. No more, no less. When we come to add multiple pages we don’t change this code. If we come to add another type of page – perhaps we don’t change this code either but create another type of publisher. Later on we can work out whether these publishers have shared, base code. A decision we can defer until refactoring.

using System.Linq;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Publishing;

namespace YourCompany.SharePoint.Infrastructure.Provisioning
{
    public class PagePublisher : IPagePublisher
    {
        public const string SiteCreated = "MySiteCreated";

        private readonly Page _page;
        private readonly SPWeb _context;
        private PublishingWeb _pubWeb;

        public PagePublisher(SPWeb web, Page publishedPage)
        {
            _page = publishedPage;
            _context = web;
        }

        public void Add()
        {
            _pubWeb = PublishingWeb.GetPublishingWeb(_context);

            if (IsProvisioned()) return;

            DisableVersioning();

            if (!HomePageExists)
                CreatePage();

            AddAsDefault();
            Commit();

            // mark the site as provisioned
            DoUnsafeUpdates(x =>
            {
                x.Properties.Add(SiteCreated, "true");
                x.Properties.Update();
            });
        }

        public bool IsProvisioned()
        {
            return _context.Properties.ContainsKey(SiteCreated);
        }

        private void AddAsDefault()
        {
            _pubWeb.SetDefaultPage(HomePage.ListItem.File);
        }

        private void Commit()
        {
            _pubWeb.Update();
        }

        // SetDefaultPage and HasPublishingPage are our extension-method wrappers
        // over the SharePoint API; GetPublishingPage is an assumed counterpart
        // that resolves the provisioned page by name
        private bool HomePageExists
        {
            get { return _pubWeb.HasPublishingPage(_page.Name); }
        }

        private PublishingPage HomePage
        {
            get { return _pubWeb.GetPublishingPage(_page.Name); }
        }

        private void DoUnsafeUpdates(Action<SPWeb> action)
        {
            var currentState = _context.AllowUnsafeUpdates;
            _context.AllowUnsafeUpdates = true;
            action(_context);
            _context.AllowUnsafeUpdates = currentState;
        }

        private void DisableVersioning()
        {
            var pages = _pubWeb.PagesList;
            pages.EnableVersioning = false;
            pages.ForceCheckout = false;
            pages.Update();
        }

        private void CreatePage()
        {
            var layout = _pubWeb.GetAvailablePageLayouts()
                .Where(p => p.Name == _page.PageLayout)
                .SingleOrDefault();

            var page = _pubWeb.GetPublishingPages().Add(_page.Name, layout);
            page.Title = _page.Title;
            page.Update();
        }
    }
}

You should note that our tests against SharePoint are actually small and neat. In this case, it tests with one dependency (SharePoint) and one interaction (Add). The tests should be that simple. Setup and teardown are where it gets a little harder. Below requires a Setup which swaps out the original page so that the new one can be added and so that we know it is different. Teardown cleans up the temp publishing page. Note: at this stage the code in the Setup has not been abstracted into its own helper – we could do this later.

namespace Test.Integration.Provisioning
{
    [TestFixture]
    public class AddPagePublisherTest
    {
        private Page _publishedPage;
        private const string HomeAspx = "Home.aspx"; // assumption: the page name asserted below, not the site url
        private readonly string _homePage = a.TestUser.HomePage;
        private PublishingPage _tempPublishingPage;

        [SetUp]
        public void SetUp()
        {
            _publishedPage = new Page("Home.aspx", "Home Page", "HomePage.aspx");

            Site.OpenPublishingWeb(_homePage, x =>
            {
                var homePageLayout = x.GetPublishingPageLayout(_publishedPage.PageLayout);
                _tempPublishingPage = x.AddPublishingPage("TempPage.aspx", homePageLayout);
                x.SetDefaultPage(_tempPublishingPage.ListItem.File);
                x.Update();
                x.DeletePublishingPage(_publishedPage.Name);
            });
        }

        [Test]
        public void CanAddPage()
        {
            Site.OpenWeb(_homePage, x => new PagePublisher(x, _publishedPage).Add());
            SiteExpectations.ExpectListItem(_homePage, x => x.Name == HomeAspx);
        }

        [TearDown]
        public void Teardown()
        {
            Site.OpenWeb(_homePage, x => x.DeletePublishingPage(_tempPublishingPage.Name));
        }
    }
} 

System: check that it all works in SharePoint when deployed

TODO

  • have an attribute
  • be able to read the attributes from a class
  • return a publisher for a page rather than a page
  • create a service for processing a publisher
  • actually publish pages in the context of SharePoint (integration)
  • check that it all works in SharePoint when deployed (system)
  • add new types of publishers into the factory class

The final step in this cadence is to create system tests by completing the acceptance tests. Theoretically, this step should not be needed because the code talks to SharePoint. In practice, this step finds problems and cleans up code at the same time. Let's return to the test that we originally wrote and have been delivering to – but not coding against. We are now going to implement the system test using test-last development. Here is the story:

Story is Solution Deployment

In order to create new pages for user
As a user
I want a MySites available

With scenario have a new feature
  Given I have a new wsp package mysites.wsp
  When site is deployed
    And I am on site http://mysites/personal/45678
  Then Publishing Site Feature F6924D36-2FA8-4f0b-B16D-06B7250180FA is site activated

You may notice in the code above that we have actually added some more information to the original story. We now know that the personal site has an id in it (http://mysites/personal/45678) and we also know the GUID of the feature (F6924D36-2FA8-4f0b-B16D-06B7250180FA). Adding, changing and morphing system tests often happens, particularly as we learn more about our implementation – in our case, we have to run the analysts/product owner through these changes!

Now we need to find a library to provide an implementation. There are a number of libraries: Cucumber, nBehave or StoryQ. We have chosen StoryQ. StoryQ has a GUI converter that can take the text input above and turn it into C# skeleton code as per below, and then we fill out the code (Cucumber keeps the story separate from the implementation). StoryQ will then output the results of the tests in the test runner.

  using StoryQ;
  using NUnit.Framework;

  namespace Test.System.Acceptance.Provisioning
  {
      [TestFixture]
      public class SolutionDeploymentTest
      {
          [Test]
          public void SolutionDeployment()
          {
              new Story("Solution Deployment")
                  .InOrderTo("create new pages for users")
                  .AsA("user")
                  .IWant("a MySites available")

                  .WithScenario("have a new feature")
                    .Given(IHaveANewWspPackage_, "mysites.wsp")
                    .When(SiteIsDeployed)
                      .And(IAmOnSite_, "http://mysites/personal/684945")
                    .Then(__IsSiteActivated, "Publishing Site Feature", "F6924D36-2FA8-4f0b-B16D-06B7250180FA")
                 
                  .Execute();
          }

          private void IHaveANewWspPackage_(string wsp)
          {
              throw new NotImplementedException();
          }

          private void SiteIsDeployed()
          {
              throw new NotImplementedException();
          }

          private void IAmOnSite_(string site)
          {
              throw new NotImplementedException();
          }    

          private void __IsSiteActivated(string title, string guid)
          {
              throw new NotImplementedException();
          }
      }
  }

Our implementation of the system test is surprisingly simple. Because we are provisioning new features, the main test is that the deployment sequence has worked. So we are going to make two checks (once the code is deployed): is the solution deployed? Is the feature activated? To do this we are going to need to write a couple of abstractions around the SharePoint API: Solution.IsDeployed & Feature.IsSiteActivated. We do this because we want the system tests to remain extremely clean.

  using StoryQ;
  using NUnit.Framework;

  namespace Test.System.Acceptance.Provisioning
  {
      [TestFixture]
      public class SolutionDeploymentTest
      {
          [Test]
          public void SolutionDeployment()
          {
              new Story("Solution Deployment")
                  .InOrderTo("create new pages for users")
                  .AsA("user")
                  .IWant("a MySites available")

                  .WithScenario("have a new feature")
                    .Given(IHaveANewWspPackage_, "mysites.wsp")
                    .When(SiteIsDeployed)
                      .And(IAmOnSite_, "http://mysites/personal/684945")
                    .Then(__IsSiteActivated, "Publishing Site Feature", "F6924D36-2FA8-4f0b-B16D-06B7250180FA")
                 
                  .Execute();
          }

          private string _site;
          private string _wsp;

          private void IHaveANewWspPackage_(string wsp)
          {
              _wsp = wsp;
          }

          private void SiteIsDeployed()
          {
              Assert.IsTrue(Solution.IsDeployed(_wsp));
          }

          private void IAmOnSite_(string site)
          {
              _site = site;
          }    

          private void __IsSiteActivated(string title, string guid)
          {
              Assert.IsTrue(Feature.IsSiteActivated(_site, guid));
          }
      }
  }

Here are the wrappers around the SharePoint API that we are going to get reuse from. Sometimes we wrap the raw API (SPFarm):

public static class Solution
{
    public static bool IsDeployed(string wsp)
    {
        return SPFarm.Local.Solutions
            .Where(x => x.Name == wsp)
            .Any();
    }
} 

Sometimes we are wrapping our own wrappers (Site.Open):

public static class Feature
{
    public static bool IsSiteActivated(string webUrl, string feature)
    {
        return Site.Open(webUrl, site => 
             site.Features
                .Where(x =>
                    x.Definition.Id == new Guid(feature) &&
                    x.Definition.Status == SPObjectStatus.Online).Any()
             );
    }
} 
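The value-returning Site.Open wasn't shown with the earlier Site helpers, so its shape below is an assumption – like OpenWeb, but returning the result of the function:

public static class Site
{
    // as OpenWeb earlier, but returns the result of the function
    public static T Open<T>(string url, Func<SPWeb, T> func)
    {
        using (var site = new SPSite(url))
        using (var web = site.OpenWeb())
        {
            return func(web);
        }
    }
}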

Starting to wrap up the cadence for adding a feature

Below is the TODO list that we started with as we layered our code with a test automation pyramid strategy; we are now up to adding new publishers.

  • have an attribute
  • be able to read the attributes from a class
  • return a publisher for a page rather than a page
  • create a service for processing a publisher
  • actually publish pages in the context of SharePoint (integration)
  • check that it all works in SharePoint when deployed (system)
  • add new types of publishers into the factory class

So far we have only implemented NewPage and still have ActivateFeature, RemovePage and MasterPage to go:

public class SiteFeatureEventReceiver : SPFeatureReceiver
{
    [NewPage("Home.aspx", "Home Page", "HomePage.aspx")]
    [ActivateFeature(PackageCfg.PublishingWebFeature)]
    [RemovePage("Pages/default.aspx")]
    [MasterPage("CustomV4.master", MasterPage.MasterPageType.User)]
    [MasterPage("CustomMySite.master", MasterPage.MasterPageType.Host)]
    public override void FeatureActivated(SPFeatureReceiverProperties properties)
    {
        PersonalSiteProvisioner
          .Create(properties.Feature.Parent as SPWeb)
          .Process();
    }
}

To bring in the new pages, it would be a mistake to start by implementing the attributes. Instead, we need to write a system test – this may merely be an extension of the current system test – and then proceed through unit and integration tests. What we'll find is that as we add a new page we are likely to need a new concept: a factory of some sort that will return our publishers for each provisioning attribute. A hypothetical sketch of such a factory follows.
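The PersonalSiteProvisioner.Create factory used in FeatureActivated is never shown in this post, so the body below is an assumption: it reuses the first AttributeParser shown earlier and the two-argument PagePublisher constructor.

public static IProvisioner Create(SPWeb web)
{
    // read the provisioning attributes off the receiver and wire a publisher
    // per page for this web
    var publishers = AttributeParser
        .Parse<NewPageAttribute>(typeof(SiteFeatureEventReceiver), "FeatureActivated")
        .Select(page => (IPagePublisher)new PagePublisher(web, page))
        .ToList();

    return new PersonalSiteProvisioner(publishers);
}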

So let me finish with the new acceptance test for [ActivateFeature(PackageCfg.PublishingWebFeature)] and a TODO list.

Story is Solution Deployment

In order to create new pages for user
As a user
I want a MySites available

With scenario have a new feature
  Given I have a new wsp package mysites.wsp
  When site is deployed
    And I am on site http://mysites/personal/45678
  Then Publishing Site Feature F6924D36-2FA8-4f0b-B16D-06B7250180FA is site activated
    And Publishing Feature is web activated

And the TODO now becomes:

TODO

  • have a new attribute ActivateFeature (unit)
  • be able to read the attribute (unit)
  • return a publisher for a page rather than a page ActivateFeaturePublisher? (unit)
  • extend service for processing a publisher
  • actually activate feature in the context of SharePoint (integration)
  • complete acceptance test (system)

Convinced?

This is a lot of work. Layering tends to do this. But it provides the basis for scaling a code base. Quite quickly we have found that there are patterns for reuse, that we pick up SharePoint API problems in integration tests rather than post-deployment in system tests, that we can move around the code base easily-ish, and that we can refactor because the code is under test. All of this provides us with the possibility of scale benefits in terms of speed and quality. A system that is easy to code tends not to provide these scale benefits.


Test Strategy in SharePoint: Part 3 – Event Receiver as procedural, untestable feature

December 5th, 2010

Here I cover procedural, untestable code, and to do so will cover three pieces of SharePoint code. The first is sample code that suggests just how easy it is to write SharePoint code. The second is production-quality code written in the style of the first, and yet it becomes unmaintainable. The third piece of code is what we refactored the second piece to become, and we think it's maintainable. But it comes at a cost (and benefit) that I explore in "Test Strategy in SharePoint: Part 4 – Event Receiver as layered Feature".

Sample One: sample code is easy code

Here’s a current sample that we will find on the web that we find is actually the code that makes its way into production code rather than remaining sample code – so no offence to the author. Sharemuch writes:

When building publishing site using SharePoint 2010 it’s quite common to have few web parts that will make it to every page (or close to every page) on the site. An example could be a custom secondary navigation which you may choose to make a web part to allow some user configuration. This means you need to provision such web part on each and every page that requires it – right? Well, there is another solution. What you can do is to define your web part in a page layout module just like you would in a page. In MOSS this trick would ensure your web part will make it to every page that inherits your custom layout; not so in SharePoint 2010. One solution to that is to define the web part in the page layout, and programmatically copy web parts from page layout to pages inheriting them. In my case I will demonstrate how to achieve this by a feature receiver inside a feature that will be activate in site template during site creation. This way every time the site is created and pages are provisioned – my feature receiver will copy web parts from page layout to those newly created pages.

	public override void FeatureActivated(SPFeatureReceiverProperties properties)
	{
	    SPWeb web = properties.Feature.Parent as SPWeb;

	    if (null != web)
	    {
	        PublishingWeb pubWeb = PublishingWeb.GetPublishingWeb(web);
	        SPList pages = pubWeb.PagesList;

	        foreach (SPListItem page in pages.Items)
	        {
	            PublishingPage pubPage = PublishingPage.GetPublishingPage(page);
	            pubPage.CheckOut();
	            CopyWebParts(pubPage.Url, web, pubPage.Layout.ServerRelativeUrl, pubPage.Layout.ListItem.Web);
	            pubPage.CheckIn("Webparts copied from page layout");
	        }
	    }
	}

	private void CopyWebParts(string pageUrl, SPWeb pageWeb, string pageLayoutUrl, SPWeb pageLayoutWeb)
	{
	    SPWeb web = null;
	    SPWeb web2 = null;
	    SPLimitedWebPartManager pageWebPartManager = pageWeb.GetLimitedWebPartManager(pageUrl, PersonalizationScope.Shared);
	    SPLimitedWebPartManager pageLayoutWebPartManager = pageLayoutWeb.GetLimitedWebPartManager(pageLayoutUrl, PersonalizationScope.Shared);
	    web2 = pageWebPartManager.Web;
	    web = pageLayoutWebPartManager.Web;
	    SPLimitedWebPartCollection webParts = pageLayoutWebPartManager.WebParts;
	    SPLimitedWebPartCollection parts2 = pageWebPartManager.WebParts;
	    foreach (System.Web.UI.WebControls.WebParts.WebPart part in webParts)
	    {
	        if (!part.IsClosed)
	        {
	            System.Web.UI.WebControls.WebParts.WebPart webPart = parts2[part.ID];
	            if (webPart == null)
	            {
	                string zoneID = pageLayoutWebPartManager.GetZoneID(part);
	                pageWebPartManager.AddWebPart(part, zoneID, part.ZoneIndex);
	            }
	        }
	    }
	}

This sample code sets a tone that this is maintainable code. For example, there is some abstraction, with the CopyWebParts method remaining separate from the activation code. Yet if I put it against the four elements of simple design, the private method maximises clarity but won't get past the first element: passing tests.

Let’s take a look at some production quality code that I have encountered then refactored to make it maintainable code.

Sample Two: easy code goes production

All things dev puts up the sample for Customising SharePoint 2010 MySites with Publishing Sites. Here is the code below that follows the same patterns: clarity is created through refactoring to private methods at class scope. But we still see magic string constants, local error handling and procedural-style coding against the SharePoint API (in SetUpMySite()). The result is code that is easy to write, easy to deploy, manual to test, and whose reuse is through block-copy inheritance (i.e. copy and paste).

using System;
using System.Runtime.InteropServices;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Publishing;
using System.Linq;

namespace YourCompany.SharePoint.Sites.MySites
{
    [Guid("cd93e644-553f-4486-91ad-86e428c89723")]
    public class MySitesProvisionerReceiver : SPFeatureReceiver
    {
        private const string MySiteCreated = "MySiteCreated";
        private const string ResourceFile = "MySiteResources";
        private const uint _lang = 1033;

        public override void FeatureActivated(SPFeatureReceiverProperties properties)
        {
            using (SPWeb web = properties.Feature.Parent as SPWeb)
            {
                //run only if MySite hasn't been created yet as feature could be run after provisioning as well
                if (web.Properties.ContainsKey(MySiteCreated))
                    return;

                ActivatePublishingFeature(web);
                SetUpMySite(web);
            }
        }

        private void ActivatePublishingFeature(SPWeb web)
        {
            //Activate Publishing Web feature as the stapler seems to not do this consistently
            try
            {
                web.Features.Add(new Guid("94C94CA6-B32F-4da9-A9E3-1F3D343D7ECB"));
            }
            catch (Exception)
            {
                //already activated
            }
        }

        private void SetUpMySite(SPWeb web)
        {
            //turn off versioning, optional but keeps it easier for users as they are the only users of their MySite home page
            var pubWeb = PublishingWeb.GetPublishingWeb(web);
            var pages = pubWeb.PagesList;
            pages.EnableVersioning = false;
            pages.ForceCheckout = false;
            pages.Update();

            //set custom masterpage
            var customMasterPageUrl = string.Format("{0}/_catalogs/masterpage/CustomV4.master", web.ServerRelativeUrl);
            web.CustomMasterUrl = customMasterPageUrl;
            web.MasterUrl = customMasterPageUrl;

            var layout = pubWeb.GetAvailablePageLayouts().Cast<PageLayout>()
                                                         .Where(p => p.Name == "HomePage.aspx")
                                                         .SingleOrDefault();

            //set default page
            var homePage = pubWeb.GetPublishingPages().Add("Home.aspx", layout);
            homePage.Title = "Home Page";
            homePage.Update();
            pubWeb.DefaultPage = homePage.ListItem.File;

            //Add initial webparts
            WebPartHelper.WebPartManager(web,
                homePage.ListItem.File.ServerRelativeUrl,
                Resources.Get(ResourceFile, "MySiteSettingsListName", _lang),
                Resources.Get(ResourceFile, "InitialWebPartsFileName", _lang));

            web.AllowUnsafeUpdates = true;
            web.Properties.Add(MySiteCreated, "true");
            web.Properties.Update();
            pubWeb.Update();

            //set the search centre url
            web.AllProperties["SRCH_ENH_FTR_URL"] = Resources.Get(ResourceFile, "SearchCentreUrl", _lang);
            web.Update();

            //delete default page
            var defaultPageFile = web.GetFile("Pages/default.aspx");
            defaultPageFile.Delete();
            web.AllowUnsafeUpdates = false;
        }
    }
}
	

There is, for me, one more key issue: what does it really do? I was struck by the unreadability of this code and concerned by how many working parts there are and how they would all combine.

Sample Three: wouldn’t this be nice?

Here's what we refactored that code to. Hopefully there is some more intention in this. You may read it like this: I have a Personal Site that I create and process with a new page, removing an existing page, activating a feature and then setting a couple of master pages.

I like this because I can immediately ask simple questions: why do I have to remove an existing page, and why are there two master pages? It's SharePoint, so of course there are good reasons. But I am now abstracting away what SharePoint has to do for me to get this feature activated. It's not perfect but it is a good enough example to work on.

using System.Runtime.InteropServices;
using YourCompany.SharePoint.Domain.Model.Provisioning;
using YourCompany.SharePoint.Infrastructure;
using YourCompany.SharePoint.Infrastructure.Configuration;
using Microsoft.Office.Server.UserProfiles;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Publishing;

namespace YourCompany.SharePoint.MySites.Features.WebFeatureProvisioner
{
    [Guid("cb9c03cd-6349-4a1c-8872-1b5032932a04")]
    public class SiteFeatureEventReceiver : SPFeatureReceiver
    {
        [NewPage("Home.aspx", "Home Page", "HomePage.aspx")]
        [ActivateFeature(PackageCfg.PublishingWebFeature)]
        [RemovePage("Pages/default.aspx")]
        [MasterPage("CustomV4.master", MasterPage.MasterPageType.User)]
        [MasterPage("CustomMySite.master", MasterPage.MasterPageType.Host)]
        public override void FeatureActivated(SPFeatureReceiverProperties properties)
        {
            PersonalSiteProvisionerFactory
              .Create(properties.Feature.Parent as SPWeb)
              .Process();
        }
    }
}	

What I want to suggest is that this code is not necessarily easy to write given the previous solution. We are going to have to bake our own classes around the code: there's the factory class and the attributes, and we'll also find that there are the classes that the factory returns. In the end, we should have testable code, and my hunch is that we are likely to get some reuse too.
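As a rough sketch of what we would have to bake – the names below are illustrative, not the actual code base – an attribute is just a declaration of intent, and the factory reflects over the receiver to turn attributes into ordered steps:

using System;
using Microsoft.SharePoint;

// One of the declarative attributes: it carries data, nothing more.
[AttributeUsage(AttributeTargets.Method, AllowMultiple = true)]
public class RemovePageAttribute : Attribute
{
    public string Url { get; private set; }

    public RemovePageAttribute(string url)
    {
        Url = url;
    }
}

// The provisioner runs the steps that were mapped from the attributes.
public class PersonalSiteProvisioner
{
    private readonly SPWeb web;

    public PersonalSiteProvisioner(SPWeb web)
    {
        this.web = web;
    }

    public void Process()
    {
        // each step acts on the web, in declaration order
    }
}

public static class PersonalSiteProvisionerFactory
{
    public static PersonalSiteProvisioner Create(SPWeb web)
    {
        // reflect over the receiver's FeatureActivated method, collect
        // the attributes and map each one to a step
        return new PersonalSiteProvisioner(web);
    }
}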

The next entry, “Test Strategy in SharePoint: Part 4 – Event Receiver as layered Feature”, will look at how we can layer the code to make this a reality.

Categories: Uncategorized Tags: , ,

Test Strategy in SharePoint: Part 2 – good layering to aid testability

November 28th, 2010 No comments


Overall goal: write maintainable code over code that is easy to write (Freeman and Price, 2010)

In Part 1 – testing poor layering is not good TDD, I argued that we need to find better ways to think about testing SharePoint wiring code that do not confuse unit and integration tests. In this post, I outline a layering strategy for solutions that resolves this problem. Rather than only one project for code and one for tests, I use 3 projects for tests and 4 for the code – this strategy is based on DDD layering and the test automation pyramid.

  • DDD layering projects: Domain, Infrastructure, Application and UI
  • Test projects: System, Integration and Unit

Note: this entry does not give code samples – the next post will – but focuses on how the projects are organised within the Visual Studio solution and how they are sequenced when programming. I've included a task sheet that we use in sprint planning as a boilerplate list to mix and match the scope of features. Finally, I have a general rave on the need for disciplined test-first and test-last development.

Projects

Here’s the quick overview of the layers, take a look further down for fuller overview.

  • Domain: the project which has representations of the application domain and have no references to other libraries (particularly SharePoint)
  • Infrastructure: this project references Domain and has the technology-specific implementations. In this case, it has all the SharePoint API implementations
  • Application: this project is a very light orchestration layer. It is a way to get logic out of the UI layer to make it testable. Currently, we actually put all our javascript jQuery widgets in this project (which I will post about later, because we unit test (BDD-style) all our javascript and thus need to keep it away from the UI)
  • UI: this is the wiring code for SharePoint but has little else – this will make more sense once you see that we integration test all SharePoint API code (which goes in Infrastructure) and unit test any models, services or validation (which go in Domain). For example, with event receivers, methods are rarely longer than a line or two – see the sketch below
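As a minimal sketch of that thin UI layer – the class names here are hypothetical – the receiver is a single delegating line, with the orchestration living below where it can be tested:

using Microsoft.SharePoint;

// Hypothetical names: the receiver is pure wiring; the service lives
// in Application/Domain where its logic is unit tested.
public class SiteProvisionedReceiver : SPFeatureReceiver
{
    public override void FeatureActivated(SPFeatureReceiverProperties properties)
    {
        new SiteProvisioningService().Provision(properties.Feature.Parent as SPWeb);
    }
}

public class SiteProvisioningService
{
    public void Provision(SPWeb web)
    {
        // orchestrate domain and infrastructure calls here
    }
}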

Test Projects

  • System Acceptance Test: business-focused tests that describe the system – these tests should live long-term, reasonably unchanged
  • System Smoke Test: tests that can run in any environment and confirm that the system is up and running
  • Integration Test: tests that have one dependency and test one interaction, usually against a third-party API – in this case mainly the SharePoint API – and these may create scenarios on each method (see the sketch below)
  • Unit Test: tests that have no dependencies (or have them mocked out) – model tests, validations, service tests, exception handling
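To give a feel for the shape of a Tests.Integration fixture – this fixture is a hypothetical example, not from our code base – there is exactly one dependency (a live site) and one interaction per test, and setup/teardown owns the lifecycle:

using Microsoft.SharePoint;
using NUnit.Framework;

[TestFixture]
public class PagesListIntegrationTest
{
    private SPSite site;

    [SetUp]
    public void SetUp()
    {
        // the one dependency: a live local SharePoint site
        site = new SPSite("http://localhost");
    }

    [TearDown]
    public void TearDown()
    {
        site.Dispose();
    }

    [Test]
    public void ShouldFindPagesList()
    {
        // the one interaction, against the API we cannot change
        Assert.IsNotNull(site.RootWeb.Lists["Pages"]);
    }
}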

Solution Structure

Below is the source folder of code in the source repository (ie not lib/, scripts/, tools/). The solution file (.sln) lives in the src/ folder.

Taking a look below, we see our 4 layers with 3 test projects. In this sample layout, I have included folders which suggest that we have code around the provisioning and configuration of the site for deployment – see here for a description of our installation strategy. These functional areas exist across multiple projects: they have definitions in the Domain, implementations in the Infrastructure, and both unit and integration tests.

I have also included Logging because central to any productivity gains in SharePoint is using logging and avoiding the debugger. We now rarely attach a debugger for development. And if we do, it is no longer our first tactic.

You may also notice Migrations/ in Infrastructure. These are the migrations that we use with migratordotnet.

Finally, the UI layer should look familiar and this is a subset of folders.

src/
  Application/

  Domain/
    Model/
      Provisioning/
      Configuration/
    Services/
      Provisioning/
      Configuration/
    Logging/
  
  Infrastructure/
    Provisioning/
    Configuration/
    Logging/    

  Tests.System/
    Acceptance/
      Provisioning/
      Configuration/
    Smoke/
  
  Tests.Integration/
    Fixtures/
    Provisioning/
    Configuration/
    Migrations/

  Tests.Unit/
    Fixtures/
    Model/
    Provisioning/
    Configuration/
    Logging/
    Services/
    Site/    
    
  Ui/
    Features/
    Layouts/
    Package/
    PageLayouts/
    Stapler/
    ...

Writing code in our layers in practice

The cadence of the developers' work is also based on this separation. It generally looks like this:

  1. write acceptance tests (eg given/when/then)
  2. begin coding with tests
  3. sometimes starting with Unit tests – eg new Features, or jQuery widgets
  4. in practice, because it is SharePoint, move into integration tests to isolate the API task
  5. complete the acceptance tests
  6. write documentation of SharePoint process via screen shots

We also have a task sheet for estimation (for sprint planning) that is based around this cadence.

Task Estimation for story in Scrum around SharePoint feature

A note on test stability

Before I finish this post and start showing some code, I just want to point out that getting stable deployments and stable tests requires discipline. The key issues to allow for are the usual suspects:

  • start scripted deployment as early as possible
  • deploy with scripts as often as possible, if not all the time
  • try to never deploy or configure through the GUI
  • if you are going to require a migration (GUI-based configuration), script it early: while it is faster to do through the GUI, that is a developer-level (local) optimisation for efficiency and won't help with stabilisation in the medium term
  • unit tests are easy to keep stable – if they aren't then you are seriously in trouble
  • integration tests are likely to be hard to keep stable – ensure that you have the correct setup/teardown lifecycle and that you can fairly assume that the system is clean
  • as per any test, make sure integration tests are not dependent on other tests (this is standard stuff)
  • system smoke tests should run immediately after an installation and should be able to be run in any environment at any time
  • system smoke tests should not be destructive precisely because they are run in any environment, including production, to check that everything is working (see the sketch after this list)
  • system smoke tests shouldn’t manage setup/teardown because they are non-destructive
  • system smoke tests should be fast to run and fail
  • get all these tests running on the build server asap
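For illustration, a non-destructive smoke test might look like the hypothetical fixture below: no setup/teardown, safe to point at any environment, quick to run and quick to fail.

using Microsoft.SharePoint;
using NUnit.Framework;

[TestFixture]
public class SiteSmokeTest
{
    [Test]
    public void RootWebShouldBeAvailable()
    {
        // hypothetical example: the url would come from per-environment config
        using (var site = new SPSite("http://localhost"))
        {
            Assert.IsNotNull(site.RootWeb.Title);
        }
    }
}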

Test-first and test-last development

TDD does not need to be exclusively test-first development. I want to suggest that different layers require different strategies but, most importantly, that there is a consistency to the strategy to help establish cadence. This cadence is going to reduce transaction costs – knowing when we are done, quality assurance for coverage, moving code out of development. Above I outlined writing code in practice: acceptance test writing, unit, integration and then acceptance test completion.

To do this, I test-last acceptance tests. As developers we write BDD-style user story (given/when/then) acceptance tests. While each is written first, it rarely is test-driven because we might not then actually implement the story directly (although sometimes we do); rather, we park it. Then we move into the implementation encompassed by the user story, but in classical unit test assertion mode in unit and integration tests. Where there is a piece of code that is clearly unit testable (models, validation, services), this is completed test-first – and we pair on it, using Resharper support to code outside-in. We may also need to create data access code (ie SharePoint code) and this is created with integration tests. Interestingly, because it is SharePoint, we break many rules. I don't want devs to write Infrastructure code test-last, but often we need to spike the API. So we actually spike the code in the integration test and then refactor to the Infrastructure as quickly as possible. I think that this approach is slow and that we would do best to go test-first, but at this stage we are still getting a handle on good Infrastructure code to wrap the SharePoint API. The main point is that we don't have untested code in Infrastructure (or Infrastructure code lurking in the UI). These integration tests in my view are test-last in most cases simply because we aren't driving design from the tests.

At this stage, we have unfinished system acceptance tests and code in the domain and infrastructure (all tested). What we then do is hook the acceptance test code up. We do this instead of hooking up the UI because then we don't kid ourselves about whether the correct abstraction has been created. Having hooked up the acceptance tests, we can simply hook up the UI; the reverse has often not been the case. The most important point is that our Domain/Infrastructure code is now consumed by two clients (acceptance tests and UI), and this tends to prove that we have a maintainable level of abstraction for the current functionality and complexity. This approach is akin to having a problem and going to multiple people to talk about it: by the time you have had multiple perspectives, you tend to get clarity about the issues. Similarly, in allowing our code to have multiple conversations, in the form of client libraries consuming it, we know the sorts of issues our code is going to have – and hopefully, because it is software, we have refactored the big ones out (ie we can live with the level of cohesion and coupling for now).

I suspect that for framework or even line-of-business applications – and SharePoint is one of many – we should live with the test-first and test-last tension. Test-first is a deep conversation that, in my view, covers off so many of the issues. However, like in life, these conversations are not always the best to be had every time. But for the important issues they will always need to be had, and I prefer to have them early and often.

None of this means that individual developers get to choose which parts get test-first and which test-last. It requires discipline to use the same sequencing for each feature. This takes time for developers to learn and leadership to encourage (actually: enforce, review and refine). I am finding that team members can learn the rules of a particular code base within 4-8 weeks, if that is any help.

Test Strategy in SharePoint: Part 1 – testing poor layering is not good TDD

November 27th, 2010 2 comments


Overall goal: write maintainable code over code that is easy to write (Freeman and Price, 2010)

My review of SharePoint and TDD indicated that we need to use layering techniques to isolate SharePoint and not to confuse unit tests with integration tests. Let's now dive into the nature of the code we write in SharePoint.

This post is one of two. This first one reviews two existing code samples that demonstrate testing practices: SharePoint Magic 8 Ball & Unit Testing SharePoint Foundation with Microsoft Pex and Moles: Tutorial for Writing Isolated Unit Tests for SharePoint Foundation Applications. I argue against these practices because this type of mocking encourages poor code design, by relying exclusively on unit tests rather than isolating SharePoint behind integration tests. In the second post, I will cover examples that demonstrate layering and the separation of unit and integration tests, including how to structure Visual Studio solutions.

What is the nature of SharePoint coding practice?

At this stage, I am not looking at “customisation” code (ie WebParts) but rather “extension” code practices where we effectively configure up the new site.

  1. programming code is often “wiring” deployment code (eg files, mostly xml) that then needs to be glued together via .net code against the SharePoint API
  2. the resulting code is problematically procedural in nature and untestable outside its deployment environment
  3. this makes it error prone and slow
  4. it also encourages developers to work in large batches, across longer-than-needed time frames, and provides slow feedback
  5. problems tend to occur in later environments

A good summary of how the nature of a SharePoint solution may not lend itself to classical TDD:

So, now lets take a typical SharePoint project. Of course there is a range and gamut of SharePoint projects, but lets pick the average summation of them all. Thus in a typical SharePoint project, the portion where TDD is actually applicable is very small – which is the writing code part. In most, not all, SharePoint projects, we write code as small bandaids across the system, to cover that last mile – we don’t write code to build the entire platform, in fact the emphasis is to write as little code as possible, while meeting the requirements. So already, the applicability of TDD as a total percentage of the project is much smaller. Now, lets look at the code we write for SharePoint. These small bandaids that can be independent of each other, are comprised of some C#/VB.NET code, but a large portion of the code is XML files. These large portion of XML files, especially the most complex parts, define the UI – something TDD is not good at testing anyway. Yes I know attempts have been made, but attempt != standard. And the parts that are even pure C#, we deal with an API which does not lend itself well to TDD. You can TDD SharePoint code, but it’s just much harder.

What we found when we stuck to the procedural, wiring-up techniques was:

  1. long, repetitive blocks of code
  2. funky behaviours in SharePoint
  3. long periods of time sitting around watching the progress bar in the browser
  4. disagreement and confusion between (knowledgeable) developers on what they should be doing and what was happening
  5. block-copy inheritance (cut-and-paste) within and between visual studio solutions
  6. no automated testing

Looking at that problem, we formulated the charitable position that standard SharePoint techniques allow you to write easy code – or more correctly, allow you to copy and paste others' code and hack it to your needs. We decided that we would try to write maintainable code instead. This type of code is layered and SOLID.

Okay so where are the code samples for us to follow?

There are two main samples to look at and both try and deal with the SharePoint API. Let me cover these off first before I show where we went.

SharePoint Magic 8 Ball

SharePoint Magic 8 Ball is some sample source code from the Best Practices SharePoint conference in 2009. This is a nice piece of code for demonstrating both how to abstract away SharePoint and how not to worry about unit testing SharePoint.

Let me try and explain. The functionality of the code is that you ask a question of a “magic” ball and get a response – the response is actually just picking an “answer” randomly from a list. The design is a simple one in two parts: (1) the list provider for all the answers and (2) the picker (aka the Ball). The list is retrieved from SharePoint and the picker takes a list from somewhere.

The PickerTest (aka the ball)

 [TestFixture]
 public class BallPickerTest
 {
     [Test]
     public void ShouldReturnAnAnswerWhenAskedQuestion()
     {
         var ball = new Ball(); 
         var answer = ball.AskQuestion("Will it work?");

         Assert.IsNotNull(answer);
     }
} 

And the Ball class

public class Ball
{
    public List<string> Answers { get; set; }
    
    public Ball()
    {
        Answers = new List<string>{"yes"};
    }

    public string AskQuestion(string p)
    {
        Random random = new Random();
        int item = random.Next(Answers.Count);
        return Answers[item];
    }
}

That’s straightforward. So then the SharePoint class gets the persisted list.

public class SharePoint
{
    public List<string> GetAnswersFromList(string ListName)
    {
        List<String> answers = new List<string>();
        SPList list = SPContext.Current.Web.Lists[ListName];
        foreach (SPListItem item in list.Items)
        {
            answers.Add(item.Title);
        }
        return answers;
    }
}

Here's the test that shows how you can mock out SharePoint. In this case, the tester is using TypeMock. Personally, I am going to argue that this isn't an appropriate test. You can do it, but I wouldn't bother. I'll come back to how I would rather write an integration test.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using NUnit.Framework;
using TypeMock.ArrangeActAssert;
using Microsoft.SharePoint;

namespace BPC.Magic8Ball.Test
{
    [TestFixture]
    public class SharePointTest
    {
        [Test]
        public void ShouldReturnListOfAnswersFromSharePointList()
        {
              SPList fakeList = Isolate.Fake.Instance<SPList>();
              Isolate.WhenCalled(() => SPContext.Current.Web.Lists["ListName"]).WillReturn(fakeList);
              SPListItem fakeAnswerOne = Isolate.Fake.Instance<SPListItem>();
              Isolate.WhenCalled(() => fakeAnswerOne.Title).WillReturn("AnswerOne");
              Isolate.WhenCalled(() => fakeList.Items).WillReturnCollectionValuesOf(new List<SPListItem>{ fakeAnswerOne });
  
              var answers = new SharePoint().GetAnswersFromList("ListName");

              Assert.AreEqual("AnswerOne", answers.First());
        }
    }
}

The code above nicely demonstrates that TypeMock can mock out a static class that lives somewhere else. Put differently, you don't have to use dependency injection patterns to do your mocking. I want to argue that this is a poor example because it is not a practical unit test – and I'm sure that if I wasn't lazy I could find the test smell here already documented in xUnit Test Patterns by Gerard Meszaros. Not least the naming of the classes!

The key problem with the sample project is that the real test here is whether there is good decoupling between the Picker (the ball) and the answers (the SharePoint list). If there is any mocking to go on, it should be to check that the answers (SharePoint list) code is actually invoked. This also points to a potential code smell: the dependency on the answer list (either as a List or as SharePoint) is not documented clearly. In other words, it might want to be passed in through the constructor. You might even argue that the ball shouldn't have a property for the answers but rather a reference to the SharePoint (answer getter). This may seem a small point, but it is important because we need designs that scale – and decoupling has been proven to do so.

So, instead of this code:

var ball = new Ball();
var sharePoint = new SharePoint();

ball.Answers = sharePoint.GetAnswersFromList(ListName);
var answer = ball.AskQuestion("MyQuestion");

You are more likely, if handing in the list, to have:

var ball = new Ball(new SharePoint().GetAnswersFromList(ListName));
var answer = ball.AskQuestion("MyQuestion");

Or, if the SharePoint is a dependency:

var ball = new Ball(new SharePoint(ListName));
var answer = ball.AskQuestion("MyQuestion");


What is at the heart of this sample is that it is trying to unit test everything. Instead it should split tests into unit and integration tests. I have documented elsewhere very specific definitions of unit and integration tests:

  • Unit – a test that has no dependencies (do our objects do the right thing, are they convenient to work with?)
  • Integration – a test that has only one dependency and tests one interaction (usually, does our code work against code that we can’t change?)

What would I do?

  • unit test the ball – there are a number of tests around the AskQuestion() method: does the randomness work, can I hand it a list of answers? If I am going to hand in the SharePoint class, I can then hand in a fake SharePoint class as a stub and then mock the class to check that it actually calls the GetAnswersFromList() method (see the sketch below)
  • integration test the SharePoint class – the integration test requires that SharePoint is up and running. This is a great integration test because it checks that you have all your ducks lined up for an SPContext call and that you have used the correct part of the API. Interestingly, the harder part of the integration tests is getting the setup context correct. Having to deal with the setup (and teardown) now is in fact one of the most important benefits of these types of integration tests. We need to remember that these integration tests merely confirm that the API is behaving as we would expect in the context in which we are currently working. No more. No less.
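Here is a minimal sketch of that unit-test side. It assumes a hypothetical IAnswerSource seam and a Ball constructor that accepts it – neither is in the original sample – and uses a hand-rolled fake rather than an isolation framework:

using System.Collections.Generic;
using NUnit.Framework;

// Hypothetical seam so the ball no longer reaches into SharePoint itself
public interface IAnswerSource
{
    List<string> GetAnswers();
}

// Hand-rolled fake: stubs the answers and records the interaction
public class FakeAnswerSource : IAnswerSource
{
    private readonly List<string> answers;
    public bool WasCalled { get; private set; }

    public FakeAnswerSource(List<string> answers)
    {
        this.answers = answers;
    }

    public List<string> GetAnswers()
    {
        WasCalled = true;
        return answers;
    }
}

[TestFixture]
public class BallWithSourceTest
{
    [Test]
    public void ShouldCallGetAnswersWhenAskedQuestion()
    {
        var source = new FakeAnswerSource(new List<string> { "yes", "no" });
        var ball = new Ball(source); // assumes a Ball(IAnswerSource) constructor

        ball.AskQuestion("Will it work?");

        Assert.IsTrue(source.WasCalled);
    }
}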

In summary, this sample shows some layering in the code but not in the testing. Layering your tests is just as important. In SharePoint, we have had the most success in creating abstractions exclusively for SharePoint and testing these as integration tests. These SharePoint abstractions have also been created in a way that lets us hand mock implementations into other layers, so that we can unit test the parts of the code that have logic. This is a simple design that effectively makes SharePoint just another persistence-type layer. There is a little more to it than that, but that isn't for here.

Let's turn to Pex and Moles and see if it provides an option any better than TypeMock. I suspect not, because the issue here isn't the design of the test system – both can intercept classes hidden and used somewhere else with my own delegates – but rather the design of our code. I'm hoping for tools that help me write better, more maintainable code – so far both of them look like the “easy code” smell.

Pex and Moles

So I’ve headed off to Unit Testing SharePoint Foundation with Microsoft Pex and Moles: Tutorial for Writing Isolated Unit Tests for SharePoint Foundation Applications. It is a great document that tells us why SharePoint is not a framework built with testability baked in.

The Unit Testing Challenge. The primary goal of unit testing is to take the smallest piece of testable software in your application, isolate it from the remainder of the code, and determine whether it behaves exactly as you expect. Unit testing has proven its value, because a large percentage of defects are identified during its use.
The most common approach to isolate production code in a unit test requires you to write drivers to simulate a call into that code and create stubs to simulate the functionality of classes used by the production code. This can be tedious for developers, and might cause unit testing to be placed at a lower priority in your testing strategy.

It is especially difficult to create unit tests for SharePoint Foundation applications because:

* You cannot execute the functions of the underlying SharePoint Object Model without being connected to a live SharePoint Server.

* The SharePoint Object Model-including classes such as SPSite and SPWeb-does not allow you to inject fake service implementations, because most of the SharePoint Object Model classes are sealed types with non-public constructors.

Unit Testing for SharePoint Foundation: Pex and Moles. This tutorial introduces you to processes and concepts for testing applications created with SharePoint Foundation, using:

* The Moles framework – a testing framework that allows you to isolate .NET code by replacing any method with your own delegate, bypassing any hard-coded dependencies in the .NET code.

* Microsoft Pex – an automated testing tool that exercises all the code paths in a .NET code, identifies potential issues, and automatically generates a test suite that covers corner cases.

Microsoft Pex and the Moles framework help you overcome the difficulty and other barriers to unit testing applications for SharePoint Foundation, so that you can prioritize unit testing in your strategy to reap the benefits of greater defect detection in your development cycle.

The Pex and Moles sample code works through this example. It shows you how to mock out all the dependencies required from this quite typical piece of code. Taking a quick look we have these dependencies:

  • SPSite returning a SPWeb
  • Lists on web
  • then GetItemById on Lists returning an SPListItem as item
  • SystemUpdate on item

public void UpdateTitle(SPItemEventProperties properties) 
{ 
	using (SPWeb web = new SPSite(properties.WebUrl).OpenWeb()) 
	{ 
		SPList list = web.Lists[properties.ListId]; 
		SPListItem item = list.GetItemById(properties.ListItemId); 
		item["Title"] = item["ContentType"]; 
		item.SystemUpdate(false); 
	} 
}

Here's a quick sample to give you a feel, if you don't want to read the PDF. In this code, the first thing to know is that we use an “M” prefix by convention for the intercepting classes. In this case, MSPSite intercepts the SPSite and then returns an MSPWeb, on which we override Lists and return an MSPListCollection.

string url = "http://someURL"; 

MSPSite.ConstructorString = (site, _url) => 
{ 
	new MSPSite(site) 
	{ 
		OpenWeb = () => new MSPWeb
		{ 
			Dispose = () => { }, 
			ListsGet = () => new MSPListCollection 

 ...
 

There is no doubt that Pex and Moles is up to the job of mocking these out. Go and look at the article yourself. Others have commented on getting used to the syntax, and I agree that, being unfamiliar, it is not as easy as, say, Moq; it seems similar to MSpec. That's not my gripe. My gripe is that the tool helps bake badness into my design. Take the example where the sample adds validation logic into the above code – the authors seem to think that this is okay.

public void UpdateTitle(SPItemEventProperties properties) 
{ 
	using (SPWeb web = new SPSite(properties.WebUrl).OpenWeb()) 
	{ 
		SPList list = web.Lists[properties.ListId]; 
		SPListItem item = list.GetItemById(properties.ListItemId); 
		
		string content = (string)item["ContentType"]; 
		if (content.Length < 5) 
			throw new ArgumentException("too short"); 
		if (content.Length > 60) 
			throw new ArgumentOutOfRangeException("too long"); 
		if (content.Contains("\r\n")) 
			throw new ArgumentException("no new lines"); 
		item["Title"] = content; 

		item.SystemUpdate(false); 
	} 
}	

The sample, described as “a more realistic example to test”, just encourages poor separation when writing line-of-business applications. This is validation logic and there is no reason whatsoever to test validation alongside the need for data connections. Furthermore, there is little separation of concerns around exception throwing, handling and logging.

I'm therefore still frustrated that we are being shown easy-to-write code. Both Pex and Moles and TypeMock (regardless of how cool or great they are) are solutions for easy-to-write code. In this sample, we should see separation of concerns. There are models, there are validators, there is the checking of validations and error handling. All these concerns can be tested as unit tests with standard libraries.
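For instance, here is a minimal sketch of pulling that validation into its own class – the class name is mine, not the tutorial's – so the rules unit test with no list item and no live SharePoint server:

using System;

// The validation rules from the sample, extracted so they can be
// tested in isolation from the data access code.
public class TitleContentValidator
{
    public void Validate(string content)
    {
        if (content.Length < 5)
            throw new ArgumentException("too short");
        if (content.Length > 60)
            throw new ArgumentOutOfRangeException("too long");
        if (content.Contains("\r\n"))
            throw new ArgumentException("no new lines");
    }
}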

We will then also need integration tests to check that we can get list items. But these can be abstracted into other classes and, importantly, other layers. If we do that, we will also avoid the duplication we currently see with the using (SPWeb web = new SPSite(properties.WebUrl).OpenWeb()) code block. I will return to this in another post that is less about layering and more about some specific SharePoint tactics once a layering strategy is in place.
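As a sketch of what that abstraction might look like – a hypothetical helper, not an API from these posts – the duplicated block collapses into a single Infrastructure method:

using System;
using Microsoft.SharePoint;

// Hypothetical helper: owns the SPSite/SPWeb lifecycle so that callers
// never repeat the using(new SPSite(...).OpenWeb()) block.
public static class SPWebScope
{
    public static void Run(string webUrl, Action<SPWeb> action)
    {
        using (SPSite site = new SPSite(webUrl))
        using (SPWeb web = site.OpenWeb())
        {
            action(web);
        }
    }
}

UpdateTitle would then become a call such as SPWebScope.Run(properties.WebUrl, web => { ... }) with only the item handling inside the lambda.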

Categories: Uncategorized Tags: , ,

SharePoint deployment packaging

November 27th, 2010 No comments

In an earlier blog post I discussed a strategy for adding migrations and scripting to the deployment of SharePoint solutions. What was implicit in there was that we were packaging the SharePoint solutions (wsp file) into a release package that had scripting. I documented here the types of scripts we used in order to get automated deployments.

Taking a step back, the reasons we went down this approach was that in practice:

  1. in the first sprint, we lost 2.5 days times 3 developers' worth of work in SharePoint API funkiness alone (eg poor error reporting, API bugs)
  2. developers are often waiting long periods of time for deployments
  3. when we did do “quick” deploys they were unreliable (ie pushing into the 14 hive), say with a javascript deployment (eg CKS.Dev.Server.vsix)

To do this, we had to take a step back from delivering functionality and instead deliver stability. This actually took us about a month to get under control – I wish I could send a particular corporation the bill for an under-finished product.

Therefore, beware. If you are reading this and this is new to you, you are likely to need to build in an initial theme over the first month or so: stability of the code base and its deployment. This is of course not a problem exclusive to SharePoint.

Approach:

  • map out the value stream for deployment through environments, identify pain points (result: scripting but not automation)
  • start scripted deployments
  • solve instability issues

Solutions:

  • versioned release packages – see below for the packing.proj msbuild tasks
  • scripted deployments
  • migrations as part of deployment
  • general rule of no GUI

Results

  1. a standard deployment now takes between 1-3 mins (each scripted installation reports time taken)
  2. the biggest lag is still the bootstrapping process that SharePoint undergoes with a new deployment

package.proj

This packaging project is an adaptation of the sample deployment project found here. In that sample, you'll see the need for dependencies in the lib/ folder (such as 7zip for compression). Down in the script you might also notice that we include migratordotnet for managing releases, which actually requires the original binaries – I need to get back to this issue and try not to include binaries like this.

<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="HelpBuild"  ToolsVersion="3.5" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <Major>0</Major>
    <Minor>1</Minor>
    <Build>0</Build>
  </PropertyGroup>

  <PropertyGroup>
    <Revision Condition="'$(Revision)'==''">0</Revision>
    <Version Condition="'$(Version)'==''">$(Major).$(Minor).$(Build).$(Revision)</Version>
    <DropLocation Condition="'$(DropLocation)'==''">$(MSBuildProjectDirectory)\CodeToDeploy\Publish</DropLocation>
    <BuildCmd Condition="'$(BuildCmd)'==''">ReBuild</BuildCmd>
    <ReleaseEnvironment Condition="'$(ReleaseEnvironment)'==''">Test</ReleaseEnvironment>


    <ReleaseName>MySites-$(ReleaseEnvironment)</ReleaseName>
    <ReleasePath>$(DropLocation)\..\Releases</ReleasePath>
    <DropLocationWsp>$(DropLocation)\wsp</DropLocationWsp>
    <BinariesRoot>$(MSBuildProjectDirectory)\src\Site</BinariesRoot>
    <LibRoot>$(MSBuildProjectDirectory)\lib</LibRoot>
    <WspRoot Condition="'$(WspOutDir)'==''">$(BinariesRoot)</WspRoot>
    <ReleaseZipFile>$(ReleasePath)\Site-$(Version).zip</ReleaseZipFile>

    <ExtractPath>$(DropLocation)\..\Deploy</ExtractPath>

    <Zip>$(MSBuildProjectDirectory)\lib\7z\7z.exe</Zip> 
  </PropertyGroup>

  <ProjectExtensions>
    <Description>Build Releasable MySites</Description>
  </ProjectExtensions>

  <ItemGroup>
    <SolutionToBuild Include="src\Site.sln">
      <Properties>Configuration=Release;Platform=x64;OutDir=$(WspRoot)\bin\x64\Release\</Properties>
    </SolutionToBuild>
    <WspToBuild Include="$(WspRoot)\UI.csproj">
      <Properties>Configuration=Release;Platform=x64;OutDir=$(WspRoot)\bin\x64\Release\</Properties>
    </WspToBuild>
    <WspFiles Include="$(BinariesRoot)\**\*\*.wsp" />
  </ItemGroup>

  <ItemGroup>
	  
    <TasksFiles Include="scripts\deploy.ps1" />
    <TasksFiles Include="scripts\deploy\migrations.ps1;
                         scripts\deploy\deploy.ps1;
                         scripts\psake.psm1;
                         scripts\deploy\install.ps1" />
    
    <MigrationFiles Include="$(WspRoot)\bin\x64\Release\Infrastructure.dll;
                             $(WspRoot)\bin\x64\Release\Domain.dll" /> 
    <MigratorFiles Include="$(LibRoot)\migratordotnet\*" /> 
  </ItemGroup>

  <Target Name="Package" DependsOnTargets="Clean;Version-Writeable;Version;Compile;Version-Reset;Version-ReadOnly;Publish;Zip"/>
  <Target Name="Install" DependsOnTargets="Package;Extract;Deploy"/>

  <Target Name="Compile">
    <MSBuild Projects="@(SolutionToBuild)" Targets="Rebuild" />
    <CallTarget Targets="PackageWsp" />
  </Target>

  <Target Name="PackageWsp">
    <MSBuild Projects="@(WspToBuild)" Targets="ReBuild;Package" />
  </Target>

  <Target Name="Publish">
    <MakeDir Directories="$(DropLocationWsp)" Condition = "!Exists('$(DropLocationWsp)')" />
    
    <Copy SourceFiles="@(WspFiles)" DestinationFolder ="$(DropLocationWsp)" SkipUnchangedFiles="true"/>
    <Copy SourceFiles="@(TasksFiles)" DestinationFolder ="$(DropLocation)\scripts\%(TasksFiles.RecursiveDir)" SkipUnchangedFiles="true" />
    
    <Copy SourceFiles="@(MigrationFiles)" DestinationFolder ="$(DropLocation)\lib\%(MigrationFiles.RecursiveDir)" SkipUnchangedFiles="true" />
    <Copy SourceFiles="@(MigratorFiles)" DestinationFolder ="$(DropLocation)\lib\migratordotnet\%(MigratorFiles.RecursiveDir)" SkipUnchangedFiles="true" />
    
    <Exec Command="echo .> &quot;$(DropLocation)\$(Version)&quot;"/>
    
    <Exec Command="echo powershell -ExecutionPolicy Unrestricted -Command &quot; &amp;{import-module .\scripts\psake.psm1; Invoke-Psake Install}&quot; > &quot;$(DropLocation)\Install.bat&quot;"/>
    <Exec Command="echo powershell -ExecutionPolicy Unrestricted -NoExit -Command &quot; &amp;{import-module .\scripts\psake.psm1; Invoke-Psake -docs}&quot; > &quot;$(DropLocation)\here.cmd&quot;"/>  
    <Exec Command="echo Include .\scripts\migrations.ps1; Include .\scripts\deploy.ps1; Include .\scripts\install.ps1 > &quot;$(DropLocation)\default.ps1&quot;"/>  
  </Target>

  <Target Name="Clean" DependsOnTargets="CleanPublish;CleanReleases" >
    <RemoveDir Directories="$(DropLocation)\.." />
  </Target>

  <Target Name="CleanPublish">
    <RemoveDir Directories="$(DropLocation)" />
  </Target>

  <Target Name="CleanReleases">
    <RemoveDir Directories="$(ReleasePath)" />
    <RemoveDir Directories="$(ExtractPath)" />
  </Target>

  <Target Name="Zip">
    <MakeDir Directories="$(ReleasePath)" Condition = "!Exists('$(ReleasePath)')" />
    <Exec Command="$(Zip) a -tzip %22$(ReleaseZipFile)%22" WorkingDirectory="$(DropLocation)"/>
  </Target>

  <Target Name="Extract">
    <MakeDir Directories="$(ExtractPath)"  Condition = "!Exists('$(ExtractPath)')"/>
    <Exec Command="$(Zip) x %22$(ReleaseZipFile)%22 -o$(ExtractPath)" WorkingDirectory="$(DropLocation)"/>
  </Target>

  <Target Name="Deploy">
    <Exec Command="$(ExtractPath)\deploy.bat $(ExtractPath)"  WorkingDirectory="$(ExtractPath)" ContinueOnError="false" />
  </Target>


  <Target Name="HelpBuild">
    <Message Text="

    msbuild /t:Package

    Examples:
      msbuild /t:Install
      msbuild /t:Install /p:ReleaseEnvironment=Test
      msbuild @build.properties /t:Package /v:d 

    Variables that can be overridden:
      DropLocation=C:\Binaries
      ReleaseEnvironment=Dev|[Test]|Prod
      BuildCmd=Build|[Rebuild] 

    Targets:
     - Compile
     - Clean
     - CleanReleases
     - Publish
     - Zip
     - Extract
     - Deploy
     - PackageWsp

     - Package (Compile;Clean;Publish;Zip)
     - Install (Compile;Clean;Publish;Zip;Extract;Deploy)

    Log output: msbuild.log
             " />
  </Target>

</Project>	
Categories: Uncategorized Tags: , ,

sharepoint-deployment-scripting-with-powershell

November 26th, 2010 No comments

PowerShell scripting approach for SharePoint (via psake)

This is a simple plea to SharePointers: can we please have a decent build runner around PowerShell scripts in the deployment environment? Most developers in other frameworks have one. So I thought that I would outline some basic techniques. psake is as good as any of the options (except rake, which I still think is vastly superior, but I don't want the operational hassle of needing it in other environments). Here's what I'm doing:

  • powershell wrapped in psake
  • modularised scripts
  • psake helpers scripts for the real world (and documented in other places)
  • helper scripts for ease of access to command-line powershell

One caveat: psake has a couple of limitations that stop it from being an otherwise good rake port:

  • it doesn’t namespace (AFAIK) tasks and that means it effectively works in the global namespace – this means we get conflicts and have to resort to poor naming practices
  • it doesn’t create good documentation for nested scripts – that seriously sucks – I want to be able to do -docs and read what I can do
  • it really doesn’t do logging well when moving across scripts (Start-Transcript seems to suck – but that might be my knowledge)

If you don't go down this road, what you end up with is a load of powershell scripts that we double-click on. That's not my idea of well structured or maintainable. Skip straight down to deploy.ps1 if you want to see what the scripts look like rather than all the plumbing. And, yes, there is way too much plumbing! Here is the script where I actually do the packaging up of these files.

File layout

default.ps1
here.cmd
deploy.cmd
scripts/
    psake.psm1
    psake1.ps1
    deploy.ps1
    install.ps1
wsp/
    MySolution.wsp

Quick explanation:

  • psake1.ps1, psake.psm1 – both come with psake. The psm1 is a module and the make-style DSL engine – go there to understand what to do. psake1 is a helper wrapper.
  • default.ps1 is a manifest that links all your psake tasks – psake looks for default.ps1 by, well, default.
  • deploy.cmd is a GUI-based wrapper that invokes a specific task (deploy in this case)
  • here.cmd brings up a command-line interface so that I can use all the psake tasks (eg Invoke-psake Deploy)
  • deploy.ps1, install.ps1 are my two specific sets of tasks – deploy has the SharePoint cmdlets linked in (install is a further wrapper that does logging and, in my case, also invokes other scripts I haven't included, like migrations)

How to use

Option One:

  1. Click deploy.cmd

Option Two:

  1. Click here.cmd – this will list out the available commands
  2. Type in powershell commands, eg Invoke-psake Deploy

Files

deploy.cmd

powershell -ExecutionPolicy Unrestricted -NoExit -Command "&{import-module .\scripts\psake.psm1; Invoke-Psake Install}"

default.ps1

Include .\scripts\deploy.ps1; 
Include .\scripts\install.ps1

here.cmd

powershell -ExecutionPolicy Unrestricted -NoExit -Command " &{import-module .\scripts\psake.psm1; Invoke-Psake -docs}"

psake1.ps1

import-module .\scripts\psake.psm1

Start-Transcript -Path .\install.log
invoke-psake @args
Stop-Transcript

remove-module psake

deploy.ps1

$framework = '3.5x64'
Properties {
    $base_dir = resolve-path .
    $solution = "MySolution.wsp"
    $application = $null
    $path = "$base_dir\wsp\$solution"
    $SolutionFileName = ""
}

Task Deploy -Depends Solution-Setup, Solution-UnInstall, Solution-Remove, Solution-Add, Solution-Install, Solution-TearDown

Task Solution-Remove -Depends Solution-Wait {
  Write-Host 'Remove solution'
  Remove-SPSolution -identity $solution -confirm:$false
}

Task Solution-Add {
  Write-Host 'Add solution'
  Add-SPSolution -LiteralPath $path
}

Task Solution-Install {
  Write-Host 'install solution'
  if($application -eq $null)
  {
      Install-SPSolution -identity $solution -GACDeployment -force
  }
  else
  {
      Install-SPSolution -identity $solution -application $application -GACDeployment -force
  }
}

Task Solution-UnInstall {
  Write-Host 'uninstall solution'
  if($application -eq $null)
  {
     Uninstall-SPSolution -identity $solution -confirm:$false
  }
  else
  {
     Uninstall-SPSolution -identity $solution -application $application -confirm:$false
  }  
}

Task Solution-Setup {
  Add-PsSnapin Microsoft.SharePoint.PowerShell
}

Task Solution-TearDown -Depends Solution-Wait {
  Remove-PsSnapin Microsoft.SharePoint.PowerShell
}

Task Solution-Wait {
  $JobName = "*solution-deployment*$SolutionFileName*"
    $job = Get-SPTimerJob | ?{ $_.Name -like $JobName }
    if ($job -eq $null) 
    {
        Write-Host 'Timer job not found'
    }
    else
    {
        $JobFullName = $job.Name
        Write-Host -NoNewLine "Waiting to finish job $JobFullName"

        while ((Get-SPTimerJob $JobFullName) -ne $null) 
        {
            Write-Host -NoNewLine .
            Start-Sleep -Seconds 2
        }
        Write-Host  "Finished waiting for job.."
    }
}

install.ps1

Task Install -Depends Logging-Start, Deploy, Logging-Stop

Task Logging-Start {
  Start-Transcript -Path .\install.log
}

Task Logging-Stop {
  Stop-Transcript
}
Categories: Uncategorized Tags: , , ,

sharepoint-deployment-in-enterprise-environment

November 26th, 2010 2 comments

SharePoint Installations: Strategy to aid continuous integration

This document outlines an approach to managing SharePoint installations. Starting from the current deployment approach, which is primarily manual, an improved approach is outlined that manages the difference between deployments and migrations and aids the automation of scripting through all environments.

Overview

For continuous integration (CI), we generally want the approach to move a single package for a SharePoint solution through all environments. Secondarily, we want it to be scripted and preferably automated. To put the foundations in place we have had to devise a clear strategy. Microsoft does not provide all the tools and techniques for SharePoint continuous integration. In fact, in most cases new features – such as the customisation of a site page or workflow creation – must be performed manually in each environment.

Shortcomings of SharePoint deployments: no real versioning across environments

SharePoint has a well-established set of techniques for deployment. Most easily, there are the through-the-GUI approaches that come with Visual Studio or can be added into Visual Studio. There are other tools that aid deployments through a GUI in a wizard style (CKS: Development Tools Edition on Codeplex). Alternatively, you can deploy solutions using stsadm and PowerShell cmdlets. Finally, you can create your own custom deployment steps (IDeploymentStep) and deployment configurations (ISharePointProjectExtension) by implementing interfaces from the core SharePoint library and adding these to your solutions and the deployment build lifecycle. This is all well explained in SharePoint 2010 Development with Visual Studio by Addison Wesley.

All of this allows for custom deployment and retraction of a SharePoint solution with bias toward GUI-based deployment and certainly manual configuration of sites. Manual configuration is particularly problematic in the enterprise environment that wishes to use continuous integration.

There is a missing piece of the puzzle that could aid automation in moving solutions through environments: the idea of migrations. Migrations are simply a convenient way for you to alter the SharePoint site in a structured and organised manner. While you could activate features through the GUI by telling operations or administrators, you would then be responsible for writing documentation. You'd also need to keep track of which changes need to be run against each environment the next time you deploy. Instead, migrations work against the SharePoint API and are versioned. Each SharePoint instance has a record of which migrations have been run, and applies only new migrations to the site.
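For a feel of what a migration looks like in code, here is a sketch modelled on the migratordotnet style mentioned at the end of this post – the version number and the SharePoint work inside Up/Down are illustrative:

using Migrator.Framework;

// Each migration is a versioned, reversible change. The runner records
// which versions have been applied against the instance.
[Migration(20101126)]
public class ActivatePublishingFeature : Migration
{
    public override void Up()
    {
        // activate the feature that would otherwise be clicked
        // through the GUI in each environment
    }

    public override void Down()
    {
        // retract the change so the installation can roll back
    }
}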

Current Approach: without migrations and bias toward GUI

Table 1 outlines the current installation approach: scripted deployments via powershell. Provisioning occurs as part of the SharePoint deployment. However, there is also a lot more configuration required, which is currently performed manually through the GUI. This leads to problems with installations across multiple environments. It is also costly in terms of time for training, documentation and problem solving.

Deployment
Provisioning | Configuration
Scripted     | GUI
Automated    | Manual

Table 1: Previous State for SharePoint installations

In contrast, Table 2 splits the installation into deployment and migrations and has a bias towards scripting and automation over GUI and manual configuration.


Deployment   | Migrations
Provisioning | Configuration
Scripted     | GUI
Automated    | Manual

Table 2: New State for SharePoint installations

Approach With Migrations and bias toward scripting

We approach this problem by isolating the installation of SharePoint solutions into provisioning and configuration phases, as per Table 3. Provisioning occurs when a new “feature” is deployed; this occurs in the event receiver code callbacks.

Configuration is actually an implicit concept in SharePoint. Activation is post-provisioning and can occur on features once provisioned. Configuration on the other hand is the main activity of site administrators in SharePoint – this is the gluing together of the system. For most SharePoint installations, these two activities are usually performed through the GUI by people. As such, this is not conducive to enterprise systems where new code and configurations need to be moved through many environments.


Provisioning | Configuration

Table 3: Installation process split between provisioning and configuration

Table 4 outlines the difference between scripted tasks and GUI-based tasks, and that there should be a preference toward scripting. Scripted tasks tend to be written in C# using the SharePoint API (or as cmdlets) and merely reproduce tasks that would normally be done in the GUI. In most cases, the functionality is the same. However, in a percentage of cases we find that each approach has its own quirks that need to be ironed out. This is very useful for finding issues early on.


Provisioning | Configuration
Scripted     | GUI

Table 4: Tasks should be scripted more than through the GUI

Table 5 outlines the distinction between automated and manual tasks. To aid maintainability through environments, tasks must be scripted for continuous integration automation. However, in practice not all tasks should be exclusively automated: many scripted tasks must be able to be run both within continuous builds and manually.


Provisioning | Configuration
Scripted     | GUI
Automated    | Manual

Table 5: Automation over Manual – but manual still required

Finally, in Table 6, for all of this work we have split the installation process into two approaches: deployments and migrations. Deployments are scripted in powershell (and msbuild) and have the ability to compile, package and deploy the SharePoint solutions. Deployments trigger provisioning in the SharePoint system by design of SharePoint. What is then missing is the ability to (re)configure the system per deployment. Migrations do this job. They apply a set of changes per installation – and they are a little more powerful than this: they can apply a set of changes in order per installation and keep these versioned. Moreover, they can work transactionally and retract changes upon errors (either automatically or manually). This is based on a technique used for database schema changes – in fact, we have extended an open source library to do the job for us.

Deployments also extend the simplistic notion of only needing a wsp for deployment: they include creating zip packages, and extracting and deploying solutions via powershell and the SharePoint API. The result is not only the standard deployment (wsp uploaded into the solution store, binaries deployed); as part of the deployment, the powershell scripts may also provision solutions that are usually provisioned through the GUI by the administrator. This will often be a prerequisite for a migration.

Migrations are versioned actions that run after the deployment. Migrations may activate, install/upgrade or configure a feature (or act upon the 14 hive). Many of these tasks have previously been performed through the GUI.

Deployment   | Migrations
Provisioning | Configuration
Scripted     | GUI
Automated    | Manual

Table 6: Installation is now completed via deployment and migrations

Changes in the code base

Having this strategy means some changes in the code base. Here are some of the things we see:

  • Migration Classes: we now have the configuration code implemented as part of the code base – this includes the ability to make configurations roll forward or backward
  • Automation: configuration can now be performed by build agents
  • Migration Framework: we have used migratordotnet to perform the migration but did have to extend the library to cater for SharePoint
  • PowerShell scripts: migrations can be run either automatically as part of installations or independently by operations – we have also used psake as a build DSL around PowerShell

So far, we haven’t included any code samples. These are to come.

Categories: Uncategorized Tags: ,