Archive for November, 2010

Test Strategy in SharePoint: Part 2 – good layering to aid testability

November 28th, 2010 No comments

Test Strategy in SharePoint: Part 2 – good layering to aid testability

Overall goal: write maintainable code over code that is easy to write (Freeman and Price, 2010)

In Part 1 – testing poor layering is not good TDD I argued that we need better ways to think about testing SharePoint wiring code, ways that do not confuse unit and integration tests. In this post, I outline a layering strategy for solutions that resolves this problem. Rather than only one project for code and one for tests, I use 3 projects for tests and 4 for the code – a strategy based on DDD layering and the test automation pyramid.

  • DDD layering projects: Domain, Infrastructure, Application and UI
  • Test projects: System, Integration and Unit

Note: this entry does not give code samples – the next post will – but focuses on how the projects are organised within the Visual Studio solution and how they are sequenced when programming. I’ve included a task sheet that we use in Sprint Planning as a boilerplate list to mix and match the scope of features. Finally, I have a general rave on the need for disciplined test-first and test-last development.


Here’s a quick overview of the layers; see further down for a fuller description.

  • Domain: the project with representations of the application domain; it has no references to other libraries (particularly SharePoint)
  • Infrastructure: this project references Domain and holds the technology-specific implementations. In this case, it has all the SharePoint API implementations
  • Application: a very light orchestration layer. It is a way to get logic out of the UI layer to make it testable. Currently, we also put all our javascript jQuery widgets in this project (I will post about that later, because we unit test – BDD-style – all our javascript and thus need to keep it away from the UI)
  • UI: the wiring code for SharePoint, with little else – this will make more sense once you see that we integration test all SharePoint API calls (that code goes in Infrastructure) and that we unit test any models, services or validation (these go in Domain). For example, with Event Receivers, the code in methods is rarely longer than a line or two.
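As a hypothetical sketch (the names here are illustrative, not from our codebase), a UI-layer event receiver stays a one-line shell by delegating immediately to a service contract defined below the UI:

```csharp
using System;

// Hypothetical names for illustration: a service contract defined in
// Domain, implemented in Infrastructure against the SharePoint API.
public interface IAnnouncementService
{
    void Publish(int itemId);
}

// In the real UI project this class would derive from SPItemEventReceiver;
// the point is that the overridden event method stays one line long.
public class AnnouncementReceiver
{
    private readonly IAnnouncementService _service;

    public AnnouncementReceiver(IAnnouncementService service)
    {
        _service = service;
    }

    // Stands in for an overridden ItemAdded(SPItemEventProperties) method.
    public void ItemAdded(int itemId)
    {
        _service.Publish(itemId); // all logic lives below the UI layer
    }
}

// The kind of recording fake a unit test would hand in.
public class RecordingAnnouncementService : IAnnouncementService
{
    public int PublishedId = -1;
    public void Publish(int itemId) { PublishedId = itemId; }
}
```

Because the receiver holds nothing but the delegation, the interesting behaviour sits in Domain/Infrastructure where it can be unit and integration tested without a SharePoint deployment.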

Test Projects

  • System Acceptance Tests: business-focused tests that describe the system – these tests should live long term, reasonably unchanged
  • System Smoke Tests: tests that can run in any environment to confirm that it is up and running
  • Integration Tests: tests that have one dependency and one interaction, usually against a third-party API – in this case mainly the SharePoint API. These may create scenarios on each method
  • Unit Tests: tests that have no dependencies (or have them mocked out) – model tests, validations, service tests, exception handling

Solution Structure

Below is the source folder of code in the source repository (ie not lib/, scripts/, tools/). The solution file (.sln) lives in the src/ folder.

Taking a look below, we see our 4 layers with 3 test projects. In this sample layout, I have included folders which suggest that we have code around the provisioning and configuration of the site for deployment – see here for a description of our installation strategy. These functional areas exist across multiple projects: they have definitions in the Domain, implementations in the Infrastructure, and both unit and integration tests.

I have also included Logging because central to any productivity gain in SharePoint is to use logging and avoid the debugger. We now rarely attach a debugger during development – and if we do, it is no longer our first tactic, as was previously the case.

You may also notice Migrations/ in Infrastructure. These are the migrations that we use with migratordotnet.

Finally, the UI layer should look familiar and this is a subset of folders.





Writing code in our layers in practice

The cadence of the developers’ work is also based on this separation. It generally looks like this:

  1. write acceptance tests (eg given/when/then)
  2. begin coding with tests
  3. sometimes starting with Unit tests – eg new Features, or jQuery widgets
  4. in practice, because it is SharePoint, move into integration tests to isolate the API task
  5. complete the acceptance tests
  6. write documentation of SharePoint process via screen shots

We also have a task sheet for estimation (for sprint planning) that is based around this cadence.

Task Estimation for story in Scrum around SharePoint feature

A note on test stability

Before I finish this post and start showing some code, I just want to point out that getting stable deployments and stable tests requires discipline. The key issues to allow for are the usual suspects:

  • start scripted deployment as early as possible
  • deploy with scripts as often as possible, if not all the time
  • try to never deploy or configure through the GUI
  • if you are going to require a migration (GUI-based configuration), script it early: while it is faster to do through the GUI, that is a developer-level (local) optimisation for efficiency and won’t help with stabilisation in the medium term
  • unit tests are easy to keep stable – if they aren’t, then you are seriously in trouble
  • integration tests are likely to be hard to keep stable – ensure that you have the correct setup/teardown lifecycle and that you can fairly assume that the system is clean
  • as per any test, make sure integration tests are not dependent on other tests (this is standard stuff)
  • system smoke tests should run immediately after an installation and should be able to be run in any environment at any time
  • system smoke tests should not be destructive, precisely because they are run in any environment, including production, to check that everything is working
  • system smoke tests shouldn’t manage setup/teardown because they are non-destructive
  • system smoke tests should be fast to run and fail
  • get all these tests running on the build server asap
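To make the setup/teardown point concrete, here is a minimal sketch (plain C#, no test framework, invented names) of the integration-test lifecycle: each test assumes a clean system and restores it, even on failure, so no test can depend on another’s leftovers:

```csharp
using System;
using System.Collections.Generic;

// Sketch of the lifecycle discipline for integration tests: every test
// gets a known clean state, and teardown runs even when the test throws.
// The in-memory list stands in for a real dependency such as a
// SharePoint list.
public class IntegrationLifecycle
{
    private List<string> _items; // stand-in for external state

    public void Setup()    { _items = new List<string>(); } // known clean state
    public void Teardown() { _items = null; }               // leave nothing behind

    public void Run(Action<List<string>> test)
    {
        Setup();
        try { test(_items); }
        finally { Teardown(); } // teardown even on failure
    }
}
```

Running two tests through this lifecycle, the second never sees the first’s data – which is exactly the independence the bullets above demand.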

Test-first and test-last development

TDD does not need to be exclusively test-first development. I want to suggest that different layers require different strategies but, most importantly, that there should be a consistent strategy to help establish cadence. This cadence reduces transaction costs – knowing when you are done, quality assurance for coverage, moving code out of development. Above I outlined writing code in practice: acceptance test writing, unit, integration and then acceptance test completion.

To do this, I test-last acceptance tests. As developers, we write BDD-style user story (given/when/then) acceptance tests. While these are written first, they rarely drive the implementation because we might not then actually implement the story directly (although sometimes we do). Rather, we park them. Then we move into the implementation encompassed by the user story, switching into classical unit-test assertion mode in unit and integration tests. Where there is a piece of code that is clearly unit testable (models, validation, services), it is completed test-first – we pair on it and use ReSharper support to code outside-in. We may also need to create data access code (ie SharePoint code), and this is created with integration tests. Interestingly, because it is SharePoint, we break many rules. I don’t want devs to write Infrastructure code test-last, but often we need to spike the API. So we actually spike the code in the integration test and then refactor it into Infrastructure as quickly as possible. I think this approach is slow and that we would do better to go test-first, but at this stage we are still getting a handle on good Infrastructure code to wrap the SharePoint API. The main point is that we don’t have untested code in Infrastructure (or Infrastructure code lurking in the UI). These integration tests are, in my view, test-last in most cases simply because we aren’t driving design from the tests.

At this stage, we have unfinished system acceptance tests and tested code in the Domain and Infrastructure. What we then do is hook up the acceptance test code. We do this instead of hooking up the UI so that we don’t kid ourselves about whether the correct abstraction has been created. Having hooked up the acceptance tests, we can usually hook up the UI simply; the reverse has often not been the case. The most important point is that our Domain/Infrastructure code is now consumed by two clients (acceptance tests and UI), and this tends to prove that we have a maintainable level of abstraction for the current functionality/complexity. This approach is akin to having a problem and talking it over with multiple people: by the time you have had multiple perspectives, you tend to get clarity about the issues. Similarly, by letting our code have multiple conversations, in the form of the client libraries that consume it, we learn the sorts of issues our code is going to have – and hopefully, because it is software, we have refactored the big ones out (ie we can live with the level of cohesion and coupling for now).

I suspect that for framework or line-of-business applications – SharePoint being one of many – we should live with the test-first and test-last tension. Test-first is a deep conversation that, in my view, covers off many of the issues. However, like life, these conversations are not always the best ones to have every time. But for the important issues they will always need to be had, and I prefer to have them early and often.

None of this means that individual developers get to choose which parts get test-first and test-last. It requires discipline to use the same sequencing for each feature. This takes time for developers to learn and leadership to encourage (actually: enforce, review and refine). I am finding that team members can learn the rules of a particular code base in 4–8 weeks, if that is any help.

Test Strategy in SharePoint: Part 1 – testing poor layering is not good TDD

November 27th, 2010 2 comments

Test Strategy in SharePoint: Part 1 – testing poor layering is not good TDD

Overall goal: write maintainable code over code that is easy to write (Freeman and Price, 2010)

My review of SharePoint and TDD indicated that we need to use layering techniques to isolate SharePoint and not to confuse unit tests with integration tests. Let’s now dive into what is the nature of the code we write in SharePoint.

This post is the first of two. It reviews two existing code samples that demonstrate testing practices: SharePoint Magic 8 Ball & Unit Testing SharePoint Foundation with Microsoft Pex and Moles: Tutorial for Writing Isolated Unit Tests for SharePoint Foundation Applications. I argue against using these practices because this type of mocking encourages poor code design: it relies exclusively on unit tests rather than isolating SharePoint behind integration tests. In the second post, I will cover examples that demonstrate layering and the separation of unit and integration tests, including how to structure Visual Studio solutions.

What is the nature of SharePoint coding practice?

At this stage, I am not looking at “customisation” code (ie WebParts) but rather “extension” code practices where we effectively configure up the new site.

  1. programming code is often “wiring” deployment code (eg files, mostly xml) that then needs to be glued together via .NET code against the SharePoint API
  2. the resulting code is problematically procedural in nature and untestable outside its deployment environment
  3. this makes it error prone and slow
  4. it also encourages developers to work in large batches, across longer-than-needed time frames, with slow feedback
  5. problems tend to occur in later environments

A good summary of how the nature of a SharePoint solution may not lend itself to classical TDD:

So, now lets take a typical SharePoint project. Of course there is a range and gamut of SharePoint projects, but lets pick the average summation of them all. Thus in a typical SharePoint project, the portion where TDD is actually applicable is very small – which is the writing code part. In most, not all, SharePoint projects, we write code as small bandaids across the system, to cover that last mile – we don’t write code to build the entire platform, in fact the emphasis is to write as little code as possible, while meeting the requirements. So already, the applicability of TDD as a total percentage of the project is much smaller. Now, lets look at the code we write for SharePoint. These small bandaids that can be independent of each other, are comprised of some C#/VB.NET code, but a large portion of the code is XML files. These large portion of XML files, especially the most complex parts, define the UI – something TDD is not good at testing anyway. Yes I know attempts have been made, but attempt != standard. And the parts that are even pure C#, we deal with an API which does not lend itself well to TDD. You can TDD SharePoint code, but it’s just much harder.

What we found when stuck to the procedural, wiring up techniques is:

  1. long, repetitive blocks of code
  2. funky behaviours in SharePoint
  3. long periods of time sitting around watching the progress bar in the browser
  4. disagreement and confusion between (knowledgable) developers on what they should be doing and what was happening
  5. block copy inheritance (cut-and-paste) within and between Visual Studio solutions
  6. no automated testing

Looking at that problem, we formulated the charitable position that standard SharePoint techniques allow you to write easy code – or, more correctly, to copy and paste others’ code and hack it to your needs. We decided that we would instead try to write maintainable code. This type of code is layered and SOLID.

Okay so where are the code samples for us to follow?

There are two main samples to look at and both try and deal with the SharePoint API. Let me cover these off first before I show where we went.

SharePoint Magic 8 Ball

SharePoint Magic 8 Ball is some sample source code from the Best Practices SharePoint conference in 2009. This is a nice piece of code for demonstrating both how to abstract away SharePoint and how not to worry about unit testing SharePoint.

Let me try and explain. The functionality of the code is that you ask a question of a “magic” ball and get a response – the response is actually just picking an “answer” randomly from a list. The design is a simple one in two parts: (1) the list provider for all the answers and (2) the picker (aka the Ball). The list is retrieved from SharePoint and the picker takes a list from somewhere.

The PickerTest (aka the ball)

 public class BallPickerTest
 {
     [Test]
     public void ShouldReturnAnAnswerWhenAskedQuestion()
     {
         var ball = new Ball();
         var answer = ball.AskQuestion("Will it work?");
         Assert.IsNotNull(answer);
     }
 }


And the Ball class

public class Ball
{
    public List<string> Answers { get; set; }

    public Ball()
    {
        Answers = new List<string> { "yes" };
    }

    public string AskQuestion(string p)
    {
        Random random = new Random();
        int item = random.Next(Answers.Count);
        return Answers[item];
    }
}

That’s straightforward. So then the SharePoint class gets the persisted list.

public class SharePoint
{
    public List<string> GetAnswersFromList(string listName)
    {
        List<string> answers = new List<string>();
        SPList list = SPContext.Current.Web.Lists[listName];
        foreach (SPListItem item in list.Items)
            answers.Add(item.Title); // collect each answer's title
        return answers;
    }
}

Here’s the test that shows how you can mock out SharePoint. In this case, the tester is using TypeMock. Personally, I am going to argue that this isn’t an appropriate test. You can do it, but I wouldn’t bother. I’ll come back to how I would rather write an integration test.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using NUnit.Framework;
using TypeMock.ArrangeActAssert;
using Microsoft.SharePoint;

namespace BPC.Magic8Ball.Test
{
    public class SharePointTest
    {
        [Test]
        public void ShouldReturnListOfAnswersFromSharePointList()
        {
            SPList fakeList = Isolate.Fake.Instance<SPList>();
            Isolate.WhenCalled(() => SPContext.Current.Web.Lists["ListName"]).WillReturn(fakeList);
            SPListItem fakeAnswerOne = Isolate.Fake.Instance<SPListItem>();
            Isolate.WhenCalled(() => fakeAnswerOne.Title).WillReturn("AnswerZero");
            Isolate.WhenCalled(() => fakeList.Items).WillReturnCollectionValuesOf(new List<SPListItem> { fakeAnswerOne });

            var answers = new SharePoint().GetAnswersFromList("ListName");

            Assert.AreEqual("AnswerZero", answers.First());
        }
    }
}
The code above nicely demonstrates that TypeMock can mock out a static class that lives somewhere else. Put differently, you don’t have to use dependency injection patterns to do your mocking. I want to argue that this is a poor example because it is not a practical unit test – and I’m sure that if I weren’t lazy I could find the test smell already documented in xUnit Test Patterns by Gerard Meszaros. Not least the naming of the classes!

The key problem with the sample project is that the real test is whether there is good decoupling between the picker (the ball) and the answers (the SharePoint list). If there is any mocking to be done, it should verify that the answers (the SharePoint list) are actually fetched. This also points to a potential code smell: the dependency on the answer list (either as a List or as SharePoint) should be documented clearly. In other words, it might want to be passed into the constructor. You might even argue that the ball should not have a property for the answers but rather a reference to the SharePoint (answer getter). These may seem small points, but they are important because we need designs that scale – and decoupling has been proven to do so.

So, instead of this code:

var ball = new Ball();
var sharePoint = new SharePoint();

ball.Answers = sharePoint.GetAnswersFromList(ListName);
var answer = ball.AskQuestion("MyQuestion");

You are more likely to hand in the list:

var ball = new Ball(new SharePoint().GetAnswersFromList(ListName));
var answer = ball.AskQuestion("MyQuestion");

Or, if the SharePoint instance is a dependency:

var ball = new Ball(new SharePoint(ListName));
var answer = ball.AskQuestion("MyQuestion");

At the heart of this sample is that it tries to unit test everything. Instead, it should split tests into unit and integration tests. I have documented very specific definitions of unit and integration tests elsewhere:

  • Unit – a test that has no dependencies (do our objects do the right thing, are they convenient to work with?)
  • Integration – a test that has only one dependency and tests one interaction (usually, does our code work against code that we can’t change?)

What would I do?

  • unit test the ball – there are a number of tests around the AskQuestion() method: does the randomness work, can I hand it a list of answers? If I hand in the SharePoint class, I can then hand in a fake SharePoint class for stubbing, and mock the class to check that it actually calls the GetAnswersFromList() method.
  • integration test the SharePoint class – the integration test requires that SharePoint is up and running. This is a great integration test because it checks that you have all your ducks lined up for an SPContext call and that you have used the correct part of the API. Interestingly, the harder part of integration tests is getting the setup context correct. Having to deal with the Setup (and Teardown) is in fact one of the most important benefits of these types of integration tests. We need to remember that these integration tests merely confirm that the API behaves as we would expect in the context in which we are currently working. No more. No less.
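To sketch that split (the IAnswerProvider abstraction below is my invention, not part of the sample): the ball takes its answer source as an explicit dependency, so unit tests can stub it, while the real SharePoint-backed implementation is exercised only in integration tests:

```csharp
using System;
using System.Collections.Generic;

// Invented abstraction: the "answer getter" the ball depends on. The real
// implementation would wrap the SharePoint list and be covered only by
// integration tests; unit tests hand in a stub.
public interface IAnswerProvider
{
    List<string> GetAnswers();
}

public class Ball
{
    private readonly IAnswerProvider _provider;
    private readonly Random _random = new Random();

    public Ball(IAnswerProvider provider)
    {
        _provider = provider; // dependency documented in the constructor
    }

    public string AskQuestion(string question)
    {
        var answers = _provider.GetAnswers();
        return answers[_random.Next(answers.Count)];
    }
}

// The kind of stub a unit test would use: it also counts calls, so the
// test can check the ball actually asked for the answers.
public class StubAnswerProvider : IAnswerProvider
{
    public int Calls;
    public List<string> GetAnswers()
    {
        Calls++;
        return new List<string> { "yes" };
    }
}
```

With this shape, unit testing the ball needs no SharePoint at all, and the integration suite only has to prove that the real provider can read the list.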

In summary, this sample shows particular layering in the code but not in the testing. Layering your tests is just as important. In SharePoint, we have had the most success in creating abstractions exclusively for SharePoint and testing these as integration tests. These SharePoint abstractions have also been created so that we can hand mock implementations into other layers, letting us unit test the parts of the code that have logic. This is a simple design that effectively makes SharePoint just another persistence-type layer. There is a little more to it than that, but that isn’t for here.

Let’s turn to Pex and Moles and see if it provides a better option than TypeMock. I suspect not, because the issue here isn’t the design of the test system – both tools can intercept classes hidden and used somewhere else with my own delegates – but rather the design of our code. I’m hoping for tools that help me write better, more maintainable code – so far, both of them look like the “easy code” smell.

Pex and Moles

So I’ve headed off to Unit Testing SharePoint Foundation with Microsoft Pex and Moles: Tutorial for Writing Isolated Unit Tests for SharePoint Foundation Applications. It is a great document that tells us why SharePoint is not a framework built with testability baked in.

The Unit Testing Challenge. The primary goal of unit testing is to take the smallest piece of testable software in your application, isolate it from the remainder of the code, and determine whether it behaves exactly as you expect. Unit testing has proven its value, because a large percentage of defects are identified during its use.
The most common approach to isolate production code in a unit test requires you to write drivers to simulate a call into that code and create stubs to simulate the functionality of classes used by the production code. This can be tedious for developers, and might cause unit testing to be placed at a lower priority in your testing strategy.

It is especially difficult to create unit tests for SharePoint Foundation applications because:

* You cannot execute the functions of the underlying SharePoint Object Model without being connected to a live SharePoint Server.

* The SharePoint Object Model – including classes such as SPSite and SPWeb – does not allow you to inject fake service implementations, because most of the SharePoint Object Model classes are sealed types with non-public constructors.

Unit Testing for SharePoint Foundation: Pex and Moles. This tutorial introduces you to processes and concepts for testing applications created with SharePoint Foundation, using:

* The Moles framework – a testing framework that allows you to isolate .NET code by replacing any method with your own delegate, bypassing any hard-coded dependencies in the .NET code.

* Microsoft Pex – an automated testing tool that exercises all the code paths in a .NET code, identifies potential issues, and automatically generates a test suite that covers corner cases.

Microsoft Pex and the Moles framework help you overcome the difficulty and other barriers to unit testing applications for SharePoint Foundation, so that you can prioritize unit testing in your strategy to reap the benefits of greater defect detection in your development cycle.

The Pex and Moles sample code works through this example. It shows you how to mock out all the dependencies required from this quite typical piece of code. Taking a quick look we have these dependencies:

  • SPSite returning a SPWeb
  • Lists on web
  • then GetItemById on Lists returning an SPListItem as item
  • SystemUpdate on item
public void UpdateTitle(SPItemEventProperties properties)
{
    using (SPWeb web = new SPSite(properties.WebUrl).OpenWeb())
    {
        SPList list = web.Lists[properties.ListId];
        SPListItem item = list.GetItemById(properties.ListItemId);
        item["Title"] = item["ContentType"];
        item.SystemUpdate(); // per the dependency list above
    }
}

Here’s a quick sample to give you a feel for it if you don’t want to read the pdf. In this code, the first thing to know is that an “M” prefix is used by convention for the intercepting classes. In this case, MSPSite intercepts SPSite and returns an MSPWeb, on which Lists is overridden to return an MSPListCollection.

string url = "http://someURL";

MSPSite.ConstructorString = (site, _url) =>
    new MSPSite(site)
    {
        OpenWeb = () => new MSPWeb
        {
            Dispose = () => { },
            ListsGet = () => new MSPListCollection()
        }
    };

There is no doubt that Pex and Moles are up to the job of mocking these out – go and look at the article yourself. Others have commented on getting used to the syntax, and I agree that, being unfamiliar with it, I don’t find it as easy as, say, Moq – though it seems similar to MSpec. That’s not my gripe. My gripe is that the tool helps bake badness into my design. Take the example where the sample adds validation logic into the above code; the authors seem to think that this is okay.

public void UpdateTitle(SPItemEventProperties properties)
{
    using (SPWeb web = new SPSite(properties.WebUrl).OpenWeb())
    {
        SPList list = web.Lists[properties.ListId];
        SPListItem item = list.GetItemById(properties.ListItemId);
        string content = (string)item["ContentType"];
        if (content.Length < 5)
            throw new ArgumentException("too short");
        if (content.Length > 60)
            throw new ArgumentOutOfRangeException("too long");
        if (content.Contains("\r\n"))
            throw new ArgumentException("no new lines");
        item["Title"] = content;
    }
}


Described in the sample as “a more realistic example to test”, this just encourages poor separation when writing line-of-business applications. It is validation logic, and there is no reason whatsoever to test validation alongside the need for data connections. Furthermore, there is little separation of concerns around exception throwing, handling and logging.
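As a sketch of the separation I mean (my own restructuring of the sample’s rules, not code from the tutorial), the three validation rules can live in a plain Domain-level class with no SharePoint dependency at all:

```csharp
using System;

// The three rules from the sample, pulled out of the event receiver so
// they can be unit tested with standard libraries and no live server.
public static class TitleValidator
{
    public static string Validate(string content)
    {
        if (content == null)
            throw new ArgumentNullException("content");
        if (content.Length < 5)
            throw new ArgumentException("too short");
        if (content.Length > 60)
            throw new ArgumentOutOfRangeException("content", "too long");
        if (content.Contains("\r\n"))
            throw new ArgumentException("no new lines");
        return content;
    }
}
```

The event receiver then shrinks back to fetching the item and assigning the validated value, and only that fetch needs an integration test against SharePoint.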

I’m therefore still frustrated that we are being shown easy-to-write code. Both Pex and Moles and TypeMock (regardless of how cool or great they are) are solutions for easy-to-write code. In this sample, we should see good separation of concerns: there are models, there are validators, there is the checking of validations and there is error handling. All of these concerns can be written as unit tests with standard libraries.

We will then also need integration tests to check that we can get list items. But these can be abstracted into other classes and, importantly, other layers. If we do that, we will also avoid the duplication we currently see with the using (SPWeb web = new SPSite(properties.WebUrl).OpenWeb()) code block. I will return to this in another post that is less about layering and more about specific SharePoint tactics once a layering strategy is in place.

Categories: Uncategorized Tags: , ,

SharePoint deployment packaging

November 27th, 2010 No comments

In this blog post I discuss a strategy for adding migrations and scripting to the deployment of SharePoint solutions. What was implicit in that was that we were packaging the SharePoint solutions (wsp files) into a release package that had scripting. I documented here the types of scripts we used in order to get automated deployments.

Taking a step back, the reason we went down this path was that, in practice:

  1. in the first sprint, we lost 2.5 days times 3 developers’ worth of work to SharePoint API funkiness alone (eg poor error reporting, API bugs)
  2. developers were often waiting long periods of time for deployments
  3. when we did do “quick” deploys, they were unreliable (ie pushing into the 14 hive), say with a javascript deployment (eg CKS.Dev.Server.vsix)

To do this, we had to take a step back from delivering features to delivering stability. This actually took us about a month to get under control – I wish I could send a particular corporation the bill for an under-finished product.

Therefore, beware. If you are reading this and this is new to you, you are likely to need to build in an initial theme over the first month or so: stability of the code base and its deployment. This is, of course, not a problem exclusive to SharePoint.


Our approach was to:

  • map out the value stream for deployment through each environment and identify pain points (result: scripting but not automation)
  • start scripted deployments
  • solve instability issues


What we put in place:

  • versioned release packages – see below for the packaging.proj msbuild tasks
  • scripted deployments
  • migrations as part of deployment
  • general rule of no


The result:

  1. a standard deployment now takes between 1-3 minutes (each scripted installation reports the time taken)
  2. the biggest lag is still the bootstrapping process that SharePoint undergoes with a new deployment


This packaging project is an adaptation of the sample deployment project found here. In that sample, you’ll see the need for dependencies in the lib/ folder (such as 7zip for compression). Down in the script you might also notice that we include migratordotnet for managing releases, which actually requires the original binaries – I need to get back to this issue and try not to include binaries like this.

<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="HelpBuild" ToolsVersion="3.5" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

  <PropertyGroup>
    <Revision Condition="'$(Revision)'==''">0</Revision>
    <Version Condition="'$(Version)'==''">$(Major).$(Minor).$(Build).$(Revision)</Version>
    <DropLocation Condition="'$(DropLocation)'==''">$(MSBuildProjectDirectory)\CodeToDeploy\Publish</DropLocation>
    <BuildCmd Condition="'$(BuildCmd)'==''">ReBuild</BuildCmd>
    <ReleaseEnvironment Condition="'$(ReleaseEnvironment)'==''">Test</ReleaseEnvironment>
    <WspRoot Condition="'$(WspOutDir)'==''">$(BinariesRoot)</WspRoot>
    <Description>Build Releasable MySites</Description>
  </PropertyGroup>

  <ItemGroup>
    <SolutionToBuild Include="src\Site.sln" />
    <WspToBuild Include="$(WspRoot)\UI.csproj" />
    <WspFiles Include="$(BinariesRoot)\**\*\*.wsp" />

    <TasksFiles Include="scripts\deploy.ps1" />
    <TasksFiles Include="scripts\deploy\migrations.ps1;
                         scripts\deploy\install.ps1" />
    <MigrationFiles Include="$(WspRoot)\bin\x64\Release\Infrastructure.dll;
                             $(WspRoot)\bin\x64\Release\Domain.dll" />
    <MigratorFiles Include="$(LibRoot)\migratordotnet\*" />
  </ItemGroup>

  <Target Name="Package" DependsOnTargets="Clean;Version-Writeable;Version;Compile;Version-Reset;Version-ReadOnly;Publish;Zip" />
  <Target Name="Install" DependsOnTargets="Package;Extract;Deploy" />

  <Target Name="Compile">
    <MSBuild Projects="@(SolutionToBuild)" Targets="Rebuild" />
    <CallTarget Targets="PackageWsp" />
  </Target>

  <Target Name="PackageWsp">
    <MSBuild Projects="@(WspToBuild)" Targets="ReBuild;Package" />
  </Target>

  <Target Name="Publish">
    <MakeDir Directories="$(DropLocationWsp)" Condition="!Exists('$(DropLocationWsp)')" />
    <Copy SourceFiles="@(WspFiles)" DestinationFolder="$(DropLocationWsp)" SkipUnchangedFiles="true" />
    <Copy SourceFiles="@(TasksFiles)" DestinationFolder="$(DropLocation)\scripts\%(TasksFiles.RecursiveDir)" SkipUnchangedFiles="true" />
    <Copy SourceFiles="@(MigrationFiles)" DestinationFolder="$(DropLocation)\lib\%(MigrationFiles.RecursiveDir)" SkipUnchangedFiles="true" />
    <Copy SourceFiles="@(MigratorFiles)" DestinationFolder="$(DropLocation)\lib\migratordotnet\%(MigratorFiles.RecursiveDir)" SkipUnchangedFiles="true" />
    <Exec Command="echo .> &quot;$(DropLocation)\$(Version)&quot;" />
    <Exec Command="echo powershell -ExecutionPolicy Unrestricted -Command &quot; &amp;{import-module .\scripts\psake.psm1; Invoke-Psake Install}&quot; > &quot;$(DropLocation)\Install.bat&quot;" />
    <Exec Command="echo powershell -ExecutionPolicy Unrestricted -NoExit -Command &quot; &amp;{import-module .\scripts\psake.psm1; Invoke-Psake -docs}&quot; > &quot;$(DropLocation)\here.cmd&quot;" />
    <Exec Command="echo Include .\scripts\migrations.ps1; Include .\scripts\deploy.ps1; Include .\scripts\install.ps1 > &quot;$(DropLocation)\default.ps1&quot;" />
  </Target>

  <Target Name="Clean" DependsOnTargets="CleanPublish;CleanReleases">
    <RemoveDir Directories="$(DropLocation)\.." />
  </Target>

  <Target Name="CleanPublish">
    <RemoveDir Directories="$(DropLocation)" />
  </Target>

  <Target Name="CleanReleases">
    <RemoveDir Directories="$(ReleasePath)" />
    <RemoveDir Directories="$(ExtractPath)" />
  </Target>

  <Target Name="Zip">
    <MakeDir Directories="$(ReleasePath)" Condition="!Exists('$(ReleasePath)')" />
    <Exec Command="$(Zip) a -tzip %22$(ReleaseZipFile)%22" WorkingDirectory="$(DropLocation)" />
  </Target>

  <Target Name="Extract">
    <MakeDir Directories="$(ExtractPath)" Condition="!Exists('$(ExtractPath)')" />
    <Exec Command="$(Zip) x %22$(ReleaseZipFile)%22 -o$(ExtractPath)" WorkingDirectory="$(DropLocation)" />
  </Target>

  <Target Name="Deploy">
    <Exec Command="$(ExtractPath)\deploy.bat $(ExtractPath)" WorkingDirectory="$(ExtractPath)" ContinueOnError="false" />
  </Target>

  <Target Name="HelpBuild">
    <Message Text="
    Usage:

      msbuild /t:Package

      msbuild /t:Install
      msbuild /t:Install /p:ReleaseEnvironment=Test
      msbuild /t:Package /v:d

    Targets that can be called individually:

     - Compile
     - Clean
     - CleanReleases
     - Publish
     - Zip
     - Extract
     - Deploy
     - PackageWsp

     - Package (Compile;Clean;Publish;Zip)
     - Install (Compile;Clean;Publish;Zip;Extract;Deploy)

    Log output: msbuild.log
             " />
  </Target>
</Project>



November 26th, 2010

PowerShell scripting approach for SharePoint (via psake)

This is a simple plea to SharePointers: can we please have a decent build runner around our PowerShell scripts in the deployment environment? Developers in most other frameworks already have one. So I thought I would outline some basic techniques. psake is as good as any of the other options (except rake, which I still think is vastly superior, but I don’t want the operational hassle of needing it in other environments). Here’s what I’m doing:

  • powershell wrapped in psake
  • modularised scripts
  • psake helpers scripts for the real world (and documented in other places)
  • helper scripts for ease of access to command-line PowerShell
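
To make ‘a build runner around PowerShell scripts’ concrete before the real project files below, here is the smallest possible psake setup – the file and task names here are my own illustration, not from the project:

```powershell
# default.ps1 – psake picks this manifest up by default
Task default -Depends Hello

# a task is just a named script block
Task Hello {
  Write-Host 'Hello from psake'
}
```

With psake.psm1 imported, Invoke-psake runs the default task and Invoke-psake -docs lists the available tasks.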

One caveat: psake is otherwise a good rake port, but it has a couple of limitations:

  • it doesn’t namespace tasks (AFAIK), which means everything effectively lives in the global namespace – so we get conflicts and have to resort to poor naming practices
  • it doesn’t create good documentation for nested scripts – that seriously sucks – I want to be able to run -docs and read what I can do
  • it really doesn’t do logging well when moving across scripts (Start-Transcript seems to suck – but that might be my lack of knowledge)

If you don’t go down this road, what you end up with is a load of PowerShell scripts that you double-click on. That’s not my idea of well structured or maintainable. Skip straight down to deploy.ps1 if you want to see what the scripts look like rather than all the plumbing. And, yes, there is way too much plumbing! The MSBuild script in the previous post is where I actually do the packaging up of these files.

File layout


Quick explanation:

  • psake1.ps1, psake.psm1 – both come with psake. The psm1 file is a module and the main DSL engine – go there to understand what to do. psake1.ps1 is a help wrapper.
  • default.ps1 is a manifest that links all your psake tasks – psake looks for default.ps1 by, well, default
  • deploy.cmd is a GUI-based wrapper that invokes a specific task (deploy in this case)
  • here.cmd brings up a command-line interface so that I can use all the psake tasks (eg Invoke-psake Deploy)
  • deploy.ps1, install.ps1 are my two specific sets of tasks – deploy has the SharePoint cmdlets linked in (install is a further wrapper that does logging and, in my own case, also invokes other scripts I haven’t included, like migrations)

How to use

Option One:

  1. Click deploy.cmd

Option Two:

  1. Click here.cmd – this will list out the available commands
  2. Type psake commands at the prompt, eg:

Invoke-psake Deploy



The cmd wrappers are one-liners. This one imports psake and invokes a specific task (Install here):

powershell -ExecutionPolicy Unrestricted -NoExit -Command "&{import-module .\scripts\psake.psm1; Invoke-Psake Install}"

default.ps1 simply links the task scripts:

Include .\scripts\deploy.ps1;
Include .\scripts\install.ps1

here.cmd lists the available tasks via -docs:

powershell -ExecutionPolicy Unrestricted -NoExit -Command " &{import-module .\scripts\psake.psm1; Invoke-Psake -docs}"

psake1.ps1 wraps an invocation with transcript logging:

import-module .\scripts\psake.psm1

Start-Transcript -Path .\install.log
invoke-psake @args

remove-module psake


# deploy.ps1
$framework = '3.5x64'

Properties {
  $base_dir = resolve-path .
  $solution = "MySolution.wsp"
  $application = $null
  $path = "$base_dir\wsp\$solution"
  $SolutionFileName = ""
}

Task Deploy -Depends Solution-Setup, Solution-UnInstall, Solution-Remove, Solution-Add, Solution-Install, Solution-TearDown

Task Solution-Remove -Depends Solution-Wait {
  Write-Host 'Remove solution'
  Remove-SPSolution -identity $solution -confirm:$false
}

Task Solution-Add {
  Write-Host 'Add solution'
  Add-SPSolution -LiteralPath $path
}

Task Solution-Install {
  Write-Host 'Install solution'
  if ($application -eq $null) {
    Install-SPSolution -identity $solution -GACDeployment -force
  } else {
    Install-SPSolution -identity $solution -application $application -GACDeployment -force
  }
}

Task Solution-UnInstall {
  Write-Host 'Uninstall solution'
  if ($application -eq $null) {
    Uninstall-SPSolution -identity $solution -confirm:$false
  } else {
    Uninstall-SPSolution -identity $solution -application $application -confirm:$false
  }
}

Task Solution-Setup {
  Add-PsSnapin Microsoft.SharePoint.PowerShell
}

Task Solution-TearDown -Depends Solution-Wait {
  Remove-PsSnapin Microsoft.SharePoint.PowerShell
}

Task Solution-Wait {
  $JobName = "*solution-deployment*$SolutionFileName*"
  $job = Get-SPTimerJob | ?{ $_.Name -like $JobName }
  if ($job -eq $null) {
    Write-Host 'Timer job not found'
  } else {
    $JobFullName = $job.Name
    Write-Host -NoNewLine "Waiting to finish job $JobFullName"
    while ((Get-SPTimerJob $JobFullName) -ne $null) {
      Write-Host -NoNewLine .
      Start-Sleep -Seconds 2
    }
    Write-Host "Finished waiting for job.."
  }
}

# install.ps1
Task Install -Depends Logging-Start, Deploy, Logging-Stop

Task Logging-Start {
  Start-Transcript -Path .\install.log
}

Task Logging-Stop {
  Stop-Transcript
}


November 26th, 2010

SharePoint Installations: Strategy to aid continuous integration

This document outlines an approach to managing SharePoint installations. Starting from the current deployment approach, which is primarily manual, an improved approach is outlined that manages the distinction between deployments and migrations and aids the automation of scripting through all environments.


For continuous integration (CI), we generally want an approach that moves a single package for a SharePoint solution through all environments. Secondarily, we want it to be scripted and preferably automated. To put the foundations in place we have had to devise a clear strategy, because Microsoft does not provide all the tools and techniques for SharePoint continuous integration. In fact, in most cases new features, such as site page customisation or workflow creation, must be configured manually in each environment.

Shortcomings of SharePoint deployments: no real versioning across environments

SharePoint has a well established set of techniques for deployment. Most easily, there are the GUI approaches that come with Visual Studio or can be added into it. There are other tools that aid deployments through a GUI in a wizard style (eg CKS: Development Tools Edition on CodePlex). Alternatively, you can deploy solutions using stsadm and PowerShell cmdlets. Finally, you can create your own custom deployment steps (IDeploymentStep) and deployment configurations (ISharePointProjectExtension), implementing the core SharePoint library interfaces and adding these to your solutions and the deployment build lifecycle. This is all well explained in SharePoint 2010 Development with Visual Studio (Addison Wesley).
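
For comparison with the GUI routes, the cmdlet route is only a couple of lines – the paths and names here are placeholders:

```powershell
# Upload the package to the farm solution store, then deploy it
# (stsadm -o addsolution / -o deploysolution are the pre-cmdlet equivalents)
Add-SPSolution -LiteralPath "C:\drop\MySolution.wsp"
Install-SPSolution -Identity "MySolution.wsp" -GACDeployment
```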

All of this allows for custom deployment and retraction of a SharePoint solution with bias toward GUI-based deployment and certainly manual configuration of sites. Manual configuration is particularly problematic in the enterprise environment that wishes to use continuous integration.

There is a missing piece of the puzzle that could aid automation in moving solutions through environments: the idea of migrations. Migrations are simply a convenient way for you to alter the SharePoint site in a structured and organised manner. You could activate features through the GUI by telling operations or administrators what to click, but you would then be responsible for writing documentation, and you would need to keep track of which changes still need to be run against each environment at the next deployment. Instead, migrations work against the SharePoint API and are versioned. Each SharePoint instance has a record of which migrations have been run, and only new migrations are applied to the site.
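
As a rough sketch of the versioning idea only – the file layout and version store here are hypothetical; we actually use an extended migratordotnet, as described later in this post:

```powershell
# Sketch: run only the migration scripts newer than the recorded version
$applied = [int](Get-Content .\applied-version.txt)        # eg 2
Get-ChildItem .\migrations\*.ps1 |
  Where-Object { [int]($_.BaseName.Split('_')[0]) -gt $applied } |
  Sort-Object { [int]($_.BaseName.Split('_')[0]) } |
  ForEach-Object {
    & $_.FullName                                          # eg 3_activate-publishing.ps1
    Set-Content .\applied-version.txt ($_.BaseName.Split('_')[0])
  }
```

The real framework also does this transactionally and can run migrations backwards.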

Current Approach: without migrations and bias toward GUI

Table 1 outlines the current installation approach: scripted deployments via PowerShell. Provisioning occurs as part of the SharePoint deployment itself. However, there is also a lot more configuration required that is currently performed manually through the GUI. This leads to problems in installations across multiple environments. It is also costly in terms of time for training, documentation and problem solving.








Table 1: Previous State for SharePoint installations

In contrast, Table 2 splits installation into deployment and migrations and has a bias towards scripting and automation over GUI and manual configuration.









Table 2: New State for SharePoint installations

Approach With Migrations and bias toward scripting

We approach this problem by isolating the installation of SharePoint solutions into provisioning and configuration phases, as per Table 3. Provisioning occurs when a new “feature” is deployed; this occurs in the Event Receiver code callbacks.

Configuration is actually an implicit concept in SharePoint. Activation is post-provisioning and can occur on features once provisioned. Configuration on the other hand is the main activity of site administrators in SharePoint – this is the gluing together of the system. For most SharePoint installations, these two activities are usually performed through the GUI by people. As such, this is not conducive to enterprise systems where new code and configurations need to be moved through many environments.



Table 3: Installation process split between provisioning and configuration

Table 4 outlines the difference between scripted tasks and GUI-based tasks, and that there should be a preference toward scripting. Scripted tasks tend to be written in C# using the SharePoint API (or as cmdlets) and merely reproduce tasks that would normally be done in the GUI. In most cases, the functionality is the same. However, in a percentage of cases we find that each approach has its own quirks that need to be ironed out – scripting is very useful for finding these issues early on.





Table 4: Tasks should be scripted more than through the GUI

Table 5 outlines the distinction between automated and manual tasks. To aid maintainability through environments, tasks must be scripted for continuous integration automation. In practice, however, not all tasks should be exclusively automated: many scripted tasks must be able to run both within continuous builds and manually.







Table 5: Automation over Manual – but manual still required
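
The same psake task can therefore be driven by a build agent or typed by an operator – the commands below are lifted from the wrapper scripts in the previous posts:

```powershell
# build agent / Install.bat (non-interactive):
powershell -ExecutionPolicy Unrestricted -Command "&{import-module .\scripts\psake.psm1; Invoke-Psake Install}"

# operator at an interactive psake prompt (via here.cmd):
Invoke-Psake Install
```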

Finally, in Table 6, for all of this work we have split the installation process into two approaches: deployments and migrations. Deployments are scripted in PowerShell (and MSBuild) and have the ability to compile, package and deploy the SharePoint solutions. Deployments trigger provisioning in the SharePoint system, by design of SharePoint. What is then missing is the ability to (re)configure the system per deployment. Migrations do this job. They apply a set of changes per installation – in fact, they are a little more powerful than this: they apply a set of changes in order per installation and keep these versioned. Moreover, they can work transactionally and retract changes upon errors (either automatically or manually). The technique is based on one used for database schema changes – in fact, we have extended an open source library to do the job for us.

Deployments also extend the simplistic notion of only needing a wsp and include creating zip packages, extracting and deploying solutions via PowerShell and the SharePoint API. The result is not only the standard deployment – the wsp uploaded into the solution store and the binaries deployed – but also, as part of the deployment, the PowerShell scripts may provision solutions that are usually provisioned through the GUI by the administrator. This will often be a prerequisite for a migration.

Migrations are versioned actions that run after the deployment. A migration may activate, install/upgrade or configure a feature (or act upon the 14 hive). Many of these tasks have previously been performed through the GUI.









Table 6: Installation is now completed via deployment and migrations
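
For instance, a migration step that replaces a GUI activation (as described above) can be a single cmdlet call – the feature name and url here are placeholders:

```powershell
# Activate a feature that an administrator would otherwise enable in Site Settings
Enable-SPFeature -Identity "MyCompany.Intranet.News" -Url "http://intranet/sites/team"
```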

Changes in the code base

Having this strategy means some changes in the code base. Here are some of the things we see:

Migration Classes: We now have the configuration code implemented as part of the code base – this includes the ability to make configurations roll forward or backwards

Automation: Configuration can now be performed by build agents

Migration Framework: We have used migratordotnet to perform the migration but did have to extend the library to cater for SharePoint

PowerShell scripts: migrations can be run either automatically as part of installations or run independently by operations – we have also used psake as a build DSL around PowerShell

So far, we haven’t included any code samples. These are to come.


SharePoint 2010 books

November 26th, 2010

I must have trawled through 10 SharePoint 2010 books of late from various publishers: Sams, Wrox, Microsoft, Addison Wesley and Apress. I have been asking the question: how do I go about thinking about programming this beast? I want the 50-foot object model and the approach to programming – so testing would be an added bonus. Interestingly, very few do this. In fact, most are made for the person who wants to configure up a system rather than do development. Fair enough, I suppose. Clearly, few people are programming up sites – not. I suppose I can keep going to the SDK or blogs, but again, I haven’t been able to get the right view from these. I have been working with devs who really know their API, but they too have been struggling to explain SharePoint to me. I have worked with various CMS systems over the years and would have thought that I would have picked it up easier. Oh well, perhaps it is me!

If you are new to SharePoint development, then one thing that I have learned is that you want books with a focus on “Solution” development rather than ones with “user” or even simply “development” in the title. In fact, I haven’t even looked at the ones on Administration, Designer or Workflow. Try this search on Amazon.

A quick run down.

My pick:

Followed by:

There’s the ones who tell you all about the governance, planning and implementation (rather than development)

Reasonably helpful ones:

Sorry, I haven’t quite got to Real World SharePoint® 2010 by Wrox. I’m hoping it’s better.

Here’s one that I’m going to try and get a copy of to read:


SharePoint TDD Review

November 26th, 2010

I’ve been looking around at what others have written about testing SharePoint and, before I do a post on how I have gone about it, here’s a quick review. It seems there are the usual positions:

  • don’t do it because it doesn’t add value and is difficult given the state of the product, community, developers and product owners
  • don’t do it because it unnecessarily expands the code base
  • do it using mocking frameworks (TypeMock or Pex/Moles)
  • don’t mock because there are too many dependencies and you really need to test in the environment and just testing using units in this way is misleading
  • do it but don’t mock out what are actually integration tests – so appropriately layer your code and tests

So where I got to is that:

  • we need to write SharePoint using layering (SOLID)
  • we need to put an abstraction in front of SharePoint in our code (interfaces or late-bound lambdas – or both)
  • we need to write SharePoint API code in ways that are still accessible to developers who are at home in the SharePoint world – so the code that uses SharePoint is kept together (in practice, this looks like driving the API calls out of WebParts and Event Receivers and into their own classes – think ports-and-adapters technique)
  • we need to layer our tests also (unit and integration – see my posts on test automation pyramid)
  • use mocks in unit tests – so no dependencies (which requires abstractions)
  • don’t mock (SharePoint) in integration tests (steel threads/walking skeletons will require this, but that is about getting code to work that isolates SharePoint)
  • you will be required to use DI (but that doesn’t mean an IoC container) – this will be easy if you layer your code and follow the rule of handing in dependencies through constructors

So writing good (layered and testable) code is the core issue. Or in the words of Freeman and Price, let’s write maintainable code over code that is easy to write. SharePoint examples are all about the marketing machine showing just how easy it is to use. That, though, isn’t maintainable code from the TDD perspective.

Here’s my most interesting, immediate experience in unit testing SharePoint – we couldn’t use MSTest. I kid you not. It wouldn’t work on 64-bit machines. We found others had had the problem. Go figure.

General case against:

Basic argument that I have a lot of sympathy for (except if I have to be the developer who puts his name against the system and has to do support because this position gives me no protection). This comment shows a really good understanding of the complexities.

So, now lets take a typical SharePoint project. Of course there is a range and gamut of SharePoint projects, but lets pick the average summation of them all. A usual SharePoint project involves many things,

  • Customer communication and prototyping – usually much much more than a typical .NET custom dev project.
  • Plenty of attention to the IT Pro side, where you decide logical and physical layouts of your servers, networks, and your site collections. Custom dev .NET projects are usually simpler to setup and plan.
  • A significant effort in branding – usually greater than your typical .NET project, simply because the master pages and CSS and JS of SharePoint are much more complex.
  • Focus on scalability, when balancing between requirements, and best practices, and experience!
  • Writing some code
  • Establishing roles within your team, which is different from the nature of the SharePoint project. Usually this is more involved in an SP project than a custom .NET dev project, simply because there is a higher overlap between ITPro and Dev in the case of SP.
  • Training required – your business users will need training on the project ~ again, usually more significant than a simple .NET custom dev project.
  • Architecture, and functionality bargaining per RUP for COTS - where there are many ways of achieving a goal, and the way picked is a delicate balance between budget, demands, and technology.

Thus in a typical SharePoint project, the portion where TDD is actually applicable is very small – which is the writing code part. In most, not all, SharePoint projects, we write code as small bandaids across the system, to cover that last mile – we don’t write code to build the entire platform; in fact the emphasis is to write as little code as possible, while meeting the requirements. So already, the applicability of TDD as a total percentage of the project is much smaller. Now, lets look at the code we write for SharePoint. These small bandaids that can be independent of each other, are comprised of some C#/VB.NET code, but a large portion of the code is XML files. This large portion of XML files, especially the most complex parts, defines the UI – something TDD is not good at testing anyway. Yes I know attempts have been made, but in most scenarios, the answer is usually no. So, TDD + SharePoint 2007? Well – screw it! Not worth it IMO.

This is supported in Designing Solutions for Microsoft® SharePoint® 2010 (p.229):

SharePoint developers have traditionally been slow to adopt modern approaches to software testing. There are various reasons for this. First, solution development for Microsoft® SharePoint® is often a specialized, fairly solitary process that isn’t automated or streamlined to the same degree as mainstream software development. Second, the ASP.NET programming model that underpins SharePoint development does not naturally lend itself to automated testing. Finally, most of the classes and methods in the SharePoint API do not implement interfaces or override virtual methods, making them difficult to “stub out” using conventional mocking techniques. However, automated, robust testing is an increasingly essential part of enterprise-scale SharePoint development.


From Comment 5 at 2/18/2009 6:56 PM

When I’m doing SharePoint development the last thing I think of (or one of the last) is the SharePoint infrastructure. For me, there are two key things that I always do. First, I build an abstraction layer over SP. I treat SP as a service and talk to it on my terms, not theirs. Second, I have a second set of integration tests that really are not that important to the main thrust of the TDD work I do, and that’s testing the SP interactions. There’s two levels of testing here: a) does my code retrieve what I want from SP and give me the results I want, and b) does SP do what I think it should. It’s like doing a write-read confirmation of a SQL call vs. making sure SQL is setup correctly and the tables/fields are there.

From Peter at 2/18/2009 9:12 PM

one of the things Bob Martin mentioned was that you can’t test all your code. You have a core set of logic that you test heavily, hopefully getting 100% coverage of this CORE logic, and a volatile set of code, which you shove away in an untested cubbyhole. In my mind, our small SharePoint projects are almost all volatile code, with very little “core logic.”

In Designing Solutions for Microsoft SharePoint 2010 they recommend considering:

  • Testability. Can you test your classes in isolation? If your code is tightly coupled to a user interface, or relies on specific types, this can be challenging.
  • Flexibility. Can you update or replace dependencies without editing and recompiling your code?
  • Configuration. How do you manage configuration settings for your solution? Will your approach scale out to an enterprise-scale deployment environment?
  • Logging and exception handling. How do you log exceptions and trace information in the enterprise environment? Is your approach consistent with that of other developers on the team? Are you providing system administrators with reliable information that they can use to diagnose problems effectively?
  • Maintainability. How easy is it to maintain your code in a code base that is constantly evolving? Do you have to rewrite your code if a dependent class is updated or replaced?

Different types of tests

Probably the best SharePoint-based explanation is in Designing Solutions for Microsoft SharePoint 2010. Here I will reproduce in full the explanations of unit and integration testing, then the final part, which backs up my experience that you can’t do integration testing using MSTest (although this is apparently fixed in Visual Studio 2010 SP1 Beta):

Unit Testing
Unit tests are automated procedures that verify whether an isolated piece of code behaves as expected in response to a specific input. Unit tests are usually created by developers and are typically written against public methods and interfaces. Each unit test should focus on testing a single aspect of the code under test; therefore, it should generally not contain any branching logic. In test-driven development scenarios, you create unit tests before you code a particular method. You can run the unit tests repeatedly as you add code to the method. Your task is complete when your code passes all of its unit tests.
A unit test isolates the code under test from all external dependencies, such as external APIs, systems, and services. There are various patterns and tools you can use to ensure that your classes and methods can be isolated in this way, and they are discussed later in this section.
Unit tests should verify that the code under test responds as expected to both normal and exceptional conditions. Unit tests can also provide a way to test responses to error conditions that are hard to generate on demand in real systems, such as hardware failures and out-of-memory exceptions. Because unit tests are isolated from external dependencies, they run very quickly; it is typical for a large suite consisting of hundreds of unit tests to run in a matter of seconds. The speed of execution is critical when you are using an iterative approach to development, because the developer should run the test suite on a regular basis during the development process.
Unit tests make it easier to exercise all code paths in branching logic. They do this by simulating conditions that are difficult to produce on real systems in order to drive all paths through the code. This leads to fewer production bugs, which are often costly to the business in terms of the resulting downtime, instability, and the effort required to create, test, and apply production patches.

Integration Testing
While unit tests verify the functionality of a piece of code in isolation, integration tests verify the functionality of a piece of code against a target system or platform. Just like unit tests, integration tests are automated procedures that run within a testing framework. Although comprehensive unit testing verifies that your code behaves as expected in isolation, you still need to ensure that your code behaves as expected in its target environment, and that the external systems on which your code depends behave as anticipated. That is the role of integration testing.
Unlike a unit test, an integration test executes all code in the call path for each method under test, regardless of whether that code is within the class you are testing or is part of an external API. Because of this, it typically takes longer to set up the test conditions for an integration test. For example, you may need to create users and groups or add lists and list items. Integration tests also take considerably longer to run. However, unlike unit tests, integration tests do not rely on assumptions about the behavior of external systems and services. As a result, integration tests may detect bugs that are missed by unit tests.
Developers often use integration tests to verify that external dependencies, such as Web services, behave as expected, or to test code with a heavy reliance on external dependencies that cannot be factored out. Testers often also develop and use integration tests for more diverse scenarios, such as security testing and stress testing.
In many cases, organizations do not distinguish between integration and unit testing, because both types of tests are typically driven by unit testing frameworks such as nUnit, xUnit, and Microsoft Visual Studio Unit Test. Organizations that employ agile development practices do, however, make this distinction, since the two types of tests have different purposes within the agile process.

Note: In the Visual Studio 2010 release, there is a limitation that prevents you from running integration tests against SharePoint assemblies using Visual Studio Unit Test. Unit tests created for Visual Studio Unit Test must be developed using the Microsoft .NET Framework 4.0 in Visual Studio 2010, whereas SharePoint 2010 assemblies are based on .NET Framework 3.5. In many cases, this is not an issue: .NET Framework 4.0 assemblies are generally compatible with .NET Framework 3.5 assemblies, so you can run a .NET Framework 4.0 test against a .NET Framework 3.5 assembly. However, the way in which SharePoint loads the .NET common language runtime (CLR) prevents the runtime from properly loading and running the tests within Visual Studio Unit Test.
This limitation prevents you from running integration tests with SharePoint within Visual Studio Unit Test. Integration tests execute real SharePoint API logic instead of substituting the logic with a test implementation. Two isolation tools discussed in the following sections, TypeMock and Moles, will continue to work because they intercept calls to the SharePoint API before the actual SharePoint logic is invoked. You can execute integration tests using a third-party framework such as xUnit or nUnit. Coded user interface (UI) tests against SharePoint applications will run without any issues from within Visual Studio 2010.

Against Mocking:

From Eric Shupps at 2/18/2009 7:30 PM

While I agree that mocked objects do have value, I believe that the overall value is much less in an environment like SharePoint. You absolutely cannot duplicate the functionality of the core system, no matter how hard you try – reflection only takes you so far. Most of what Isolator does has no bearing on what actually happens in a live system. Fake objects are fine in tightly controlled environments but they just don’t do the trick in SharePoint. I do, however, encourage you to keep innovating and producing better and better tools for the development community.

For Mocking via IOC:

From Paul at 2/19/2009 12:11 AM

Let me give a real code example. Take this:

private Product[] GetProductsOnSpecialFromList(string url, string listName)
{
    var results = new List<Product>();
    var webProvider = IOC.Resolve<IWebSiteProvider>();
    var list = webProvider.FindWeb(url).FindList(listName);
    foreach (var item in list)
    {
        // Add an item to the products list
        // Only include products in the InStock state
        // Only include products which are on special
        // More and more business rules
    }
    return results.ToArray();
}

What do we have here? Business logic. That’s the kind of stuff I want to do TDD against. But it talks to SharePoint? Well, it talks to an interface. Behind the interface could be SharePoint, or it could be a mock object. I don’t care. It’s called “loose coupling”. Can I test it? Absolutely. And it’s dead easy. I can test all kinds of permutations on my business logic. In fact, I can even tell what happens when a URL is invalid or a site is offline – my mock IWebProvider would throw an exception, and I’d test that my business logic responded appropriately.

Now, eventually, behind that interface, you might want to test SharePoint. For that, you could make a decision – do I write an integration test, or do I not care? Either way, that code should only account for 5% of your code base. You can do TDD for the other 95%. In short, I suspect you find testing hard because you write tightly-coupled, hard to maintain code that mixes business logic/behavior with SharePoint crap. If you wrote loosely coupled code, you’d find testing easy.




General discussion on TDD around SharePoint

Sample Code:
