Response to programming is not a craft

January 17th, 2011

I have been trying to keep up with the discussion around Dan North’s post, but being on holiday makes it difficult. Luckily the slowness to respond means I don’t have to write some things, because Dan has followed up with this. As he points out in his critical feedback section, correct definitions were not really the point. When I started making notes I felt there were problems in his post because it made a number of disparate assertions, including at least these:

  • Programming is not a craft
  • Programming requires skill
  • Programming entry has been “democratised”
  • Good programming requires expertise and experience
  • Poor programming is still dominant
  • Good programmers are not being rewarded
  • Programmers still aren’t part of a profession
  • Programming should be a means-to-an-end (value) and not an end-in-itself (the program)
  • A programmer is not an artist
  • Programmers that act as artists are a pain
  • Programming is best thought of as a trade

All good points. I’m pretty sure most of these are in Software Craftsmanship by Pete McBreen, though. So I’m less worried about making a clear distinction between programming-as-craft and programming-as-trade. Why? Because I think the distinction can be circumvented with arguments from Richard Sennett in The Craftsman. Firstly, Sennett does think that the computer programmer is potentially a craftsperson, albeit a contemporary one – “Developments in high technology reflect an ancient model for craftsmanship”. Laurie Taylor writes:

Craft, insists Sennett, is as important in modern society as it ever was in the medieval guilds and it is not simply to be found in the work of such traditional craftspeople as silversmiths, carpenters and potters. It can also be seen in the scientific laboratory (the equivalent of the old workshops) or in the work of software developers… Craftsmanship for Sennett is “an enduring, basic human impulse, the desire to do a job well for its own sake.” But, as he shows, in today’s labour market, “doing good work is not a guarantee of good fortune. In work, as in politics, sharks and incompetents have no trouble succeeding.”

If you want a quick introduction to craftwork and skill from Sennett, there is an interview with him here.

Who is a craftsperson?

In his book, he defines the craftsperson based on five aspects:

  • Uses manual skills to produce things
  • Does things for their own sake
  • Has a level of freedom to experiment
  • Is rewarded for competence
  • Works within the context of a community to meet standards

So, really, do I use manual skills to produce things? How manual is typing, really? I do woodwork and play instruments, and I have a sense that the manual skill typing requires is both similar and different. Touch typing requires practice, and being able to anticipate code while typing tends to require me not to be thinking about typing at the same time. But I don’t have to be able to touch type to be a skilled programmer – indeed, I might not even need to type if I can do entry another way. So typing is probably different from being able to feel the grain of wood while planing, or releasing keys on an instrument. Sennett’s argument is far more subtle. He argues that the hand informs the work of the mind and that advanced hand technique might inform technical skills. Under a theme of tempo and anticipation, you should not be overly conscious of your hands or you lose the insight of anticipation. If you are interested in all this, there is a wonderful description in David Sudnow’s Ways of the Hand, where he describes learning jazz piano.

On being troubled …

Then come the next three items, which in the world of line-of-business applications get more difficult. Personally, I find that I can satisfy these in my projects. Yet I have known many (and I worked for NZ’s largest software house) who argued you couldn’t. Working within that environment left me troubled, partly because I found I was working on projects that didn’t satisfy the last item. Sennett has argued that the contemporary craftsperson is likely to be troubled in three key ways:

Motivation

Sennett writes:

Developments in high technology reflect an ancient model for craftsmanship, the reality on the ground is that people who aspire to being good craftsmen are depressed, ignored, or misunderstood by social institutions. These ills are complicated because few institutions set out to produce unhappy workers. People seek refuge in inwardness when material engagement proves empty; mental anticipation is privileged above concrete encounter; standards of quality in work separate design from execution.

Developing Skills

Some notes here are:

  • Be wary of the idea that skills are innate
  • Skills are practiced and repeated
  • Skill development is based on how repetition is organised

So the problem in project-based work is developing skills outside the immediate task that would in fact help inside it – TDD is a good example of this (one of many).

Conflicting measures of quality: correctness and functionality

What do we mean by good-quality work? One sense is how something should be done; the other is getting it to work. This is the difference between correctness and functionality. Ideally, there should be no conflict. In the real world, there is. Often we subscribe to a standard of correctness that is rarely if ever reached. We might alternatively work to the standard of what is possible, just good enough – but this can be a recipe for frustration. The desire to do good work is seldom satisfied by just getting by.

It seems to me that Sennett covers a lot of the bases found in the posts around the place. There’s also a wonderful section on developing skills. I would take the time to read and digest this piece of work. It is far more insightful than, say, the Dreyfus model of skill acquisition – which, has anyone remembered, Dreyfus himself said was merely anecdotal?


My HTC Hero with CyanogenMod for Froyo (with iPhone aspects)

January 6th, 2011

I was wondering why it took me a few days to get on top of the configuration for my Android HTC Hero running CyanogenMod (and to give it some iPhone-style features). So I’ve made a list of the main apps, widgets and bits’n’pieces that I’ve downloaded.

At the bottom I’ve started a list of some configurations too.

Apps

General

KeePassDroid
Barcode Scanner
SNTP Client
Telnet
Terminal Emulator
Contact2Sim

Settings

Locale
Locale cell plugin
Locale power source
Locale GPS Plugin
Locale SmartNight
Locale Wifi Connection Plugin
Slide it Keyboard
Better Keyboard
Gmail Unread Count
Handcent SMS

Connectivity

KeepWifi
3G Watchdog
EasySMS
PdaNet

Communication

Skype
Twitter
TweetDeck
Peep
Facebook
Yammer
Droidin
SipDroid
Smooth Calendar

Skinning for an iPhone-like look and feel

iPhone skin
ADW MacOS Theme
iPhone sounds
Better Keyboard
Slide it keyboard

Travel

mPass
CheckMyTrip

Accounts

Expensify

Configurations

Dock

I want to use all the space at the bottom of the screen, which initially had 3 items; I upped it to 5:

  1. Go to Settings > ADWLauncher > UI Settings > Main Dock Style > 5
  2. Drag items over the dock items to add or replace. To create custom icons before dragging, from the home: Menu > Add > Custom shortcut > Pick your activity > Touch the Icon > ADWTheme IconPacks

Widget (pull down for Wifi, mobile data, etc.)

I am always having problems with wifi, mobile data, syncing, etc. There are two options here. You can have them on a screen using the PowerTool widget, or you can pull down on the notification bar. I chose the latter.

  1. Settings > CyanogenMod Settings > User Interface > Widget Buttons (Then pick what you want to see)

Upgrading my HTC Hero to Android Froyo (2.2)

December 28th, 2010

I have an HTC Hero from Europe that was running Android 2.1 update 1 (OTA). This upgrade, including reinstalling apps, was about 6 hours’ work. I’m pleased with the upgrade and like some of the new features, including voice recognition. I only backed up my sdcard, not the messages or contacts on the phone (I keep mine somewhere other than the phone, with a few on the SIM). I am finding that rotating the handset is way more responsive.

Overview:

  • Backup
  • Root the phone
  • Create a Recovery ROM via RA-Hero
  • Install Custom ROM from cyanogenmod
  • Update the Radio
  • Reinstall Apps from Market

Backing up

  1. Plug in your phone via USB cable and mount the sd card so we can transfer files from it.
  2. Copy the contents of the sd card onto your hard drive.
  3. Unplug the phone.

Root the phone

http://theunlockr.com/2010/09/27/how-to-root-the-htc-hero-androot-method/

Gain Root Access with AndRoot (Note: I didn’t need to goldcard it)

  1. Download AndRoot and save it to your computer.
  2. Plug in your phone via USB cable and mount the sd card so we can transfer files to it.
  3. Copy the Universal Androot.apk file to the root of the sd card (NOT in any folders, just on the sd card itself).
  4. Unplug the phone.
  5. Go to the Market on your phone and download Linda File Manager (it’s free, search for Linda).
  6. Open Linda and look on the sd card for the Universal Androot.apk file and select it to install. Follow the prompts to install it with the Package Installer and allow unknown sources (Note: if it half installs then open the drive and delete the .Universal Androot.apk file and try again)
  7. Open Androot and click Go Root. Wait until it says “Woot! Your device is rooted!” and exit the program.

Create my Recovery ROM

I have used RA-Hero rather than the ClockworkMod ROM Manager (because the latter didn’t work for me).

  1. Download Recovery.img and save it to your computer
  2. Plug in your phone via USB cable and mount the sd card so we can transfer files to it.
  3. Copy the file, renamed to recovery.img, to the root of the sd card (NOT in any folders, just on the sd card itself).
  4. Unplug the phone.
  5. Download and install Android Terminal Emulator from the Market (use Better Terminal instead if this one doesn’t work)
  6. Run “Terminal Emulator”. Type (press and hold menu to get the keyboard up) in the following:
    su (press enter and wait for the “Super User Request Prompt”. Choose “Allow” and make sure it’s ticked to Allow every time.)
    flash_image recovery /sdcard/recovery.img (press enter – make sure this is typed exactly as seen)
  7. Turn off your phone and press Home + Power to start it up again. This should boot into recovery mode.

Installing custom rom

Looking for the latest ROMs, I downloaded the CyanogenMod ROM and the Google apps package (the file names appear in the flashing steps below). Then:

  1. Plug in your phone via USB cable and mount the sd card so we can transfer files to it.
  2. Copy these to the root of the sd card (NOT in any folders, just on the sd card itself).
  3. Unplug the phone.

Instructions here

  1. Make sure your phone is in recovery mode (Home + Power)
  2. Select Backup/Restore > Nand backup > confirm with Home > Wait until it says “Backup complete!”
  3. Press Back to get to the main menu

Wiping

  1. Select Wipe
  2. Wipe data/factory reset > Home
  3. Wipe cache > Home
  4. Wipe Dalvik-cache > Home
  5. Wipe SD:ext partition > Home

Loading custom ROM

  1. Press Back to get to the main menu
  2. Flash zip from sdcard
  3. Select update-cm-6.1.0-Hero-signed.zip > Home

Loading Google apps

  1. Flash zip from sdcard
  2. Select gapps-mdpi-20101020-signed.zip > Home

Reboot

  1. Once it is done, select Reboot and you will boot into the new Custom ROM.

I found that it took 2-5 minutes to reboot and start again. I had thought that I had got something wrong – ah, patience.

Update Radio

I didn’t need to do this but did it anyway. From here I downloaded the latest:

  • update-hero-radio-63.18.55.06SU_6.35.17.03-signed.zip
  • Plug in your phone via USB cable and mount the sd card so we can transfer files to it.
  • Copy the file to the root of the sd card and rename it to radio.zip.
  • Unplug the phone
  • Start Terminal Emulator
  • Run md5sum /sdcard/radio.zip and compare the output with the checksum listed back here
  • Turn off
  • Make sure your phone is in recovery mode (Home + Power)
  • Flash zip from sdcard
  • Select radio.zip > Home
  • Once it is done, select Reboot and you will boot with the new radio – mine looked like it broke – ah, patience again.

Never pull the battery when flashing a radio – it reboots itself during the process!

Update Apps from Market

This is just the standard updates from Market. I noticed that it remembered paid-for apps but not my others. Now, there’s an app that I would like. This soaked up a good hour or two.

Another thing I noticed was that I didn’t like the wallpapers compared with HTC Sense, and the general skins of the widgets aren’t as nice either. Functionally, I think they are the same.


SharePoint TDD Series: Maintainability over Ease

December 16th, 2010

This series is part of my wider initiative around the Test Automation Pyramid. Previously I have written about ASP.NET MVC. This series will outline a layered code and test strategy in SharePoint.

SharePoint is a large and powerful system. It can cause problems in the enterprise environment, incurring delays, cost and general frustration. Below is an overview of the main areas of innovation made in the source code to mitigate these problems. These problems arise because the fundamental design goal of SharePoint is to be an “easy” system to code: easy in the sense that it can be configured up by a general pool of developers and non-developers alike. Such a design, however, does not necessarily make the system maintainable. Extension, testability and stability may all suffer. In enterprise environments these last qualities are equally if not more important to the long-term value of software.

This series of posts outlines both the code used and the reasons behind its usage. As such it is a work in progress that will need to be referred to and updated as the code base itself changes.

Deployment Changes

Layered Code with testing

Testing on Event Receiver via declarative attributes

Testing Delegate controls which deploy jQuery

  • Part 5 – Client side strategies for javascript
  • Part 6 – Unit testing the jQuery client-side code without deploying to SharePoint
  • Part 2 – Unit testing the delegate control that houses the jQuery
  • Part 4 – Exploratory testing without automation is probably good enough

Cross-cutting concerns abstractions

Test Strategy in SharePoint: Part 4 – Event Receiver as layered Feature

December 12th, 2010

In Test Strategy in SharePoint: Part 3 – Event Receiver as procedural, untestable feature we arrived at some code that we believe to be nicer – layered and more testable. This entry will look at the tests and the code that make that design come alive.

Here’s the starting design – which may get tweaked a little as we go. In practice, we started with this design in mind (ie use attributes to declaratively decide on what to provision) but refactored the original code test-first, trying to work out what was unit- versus integration-testable and what code belonged in the domain, infrastructure and ui layers (based on good layering to aid testability). We won’t take you through that process, but it only took a few hours to knock out the first attempt.

Here we’ll take you through the creation of one type of page, the NewPage. We have put the others there to show why we made the design: we are going to require many types of pages, and we are hoping that the benefit of an attribute and its declarative style will pay off against its cost. We are looking for accessibility and maintainability as we bring on new developers – or come back to it ourselves in a couple of weeks!

using System.Runtime.InteropServices;
using YourCompany.SharePoint.Domain.Model.Provisioning;
using YourCompany.SharePoint.Infrastructure;
using YourCompany.SharePoint.Infrastructure.Configuration;
using Microsoft.Office.Server.UserProfiles;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Publishing;

namespace YourCompany.SharePoint.MySites.Features.WebFeatureProvisioner
{
    [Guid("cb9c03cd-6349-4a1c-8872-1b5032932a04")]
    public class SiteFeatureEventReceiver : SPFeatureReceiver
    {
        [NewPage("Home.aspx", "Home Page", "HomePage.aspx")]
        [ActivateFeature(PackageCfg.PublishingWebFeature)]
        [RemovePage("Pages/default.aspx")]
        [MasterPage("CustomV4.master", MasterPage.MasterPageType.User)]
        [MasterPage("CustomMySite.master", MasterPage.MasterPageType.Host)]
        public override void FeatureActivated(SPFeatureReceiverProperties properties)
        {
            PersonalSiteProvisioner
              .Create(properties.Feature.Parent as SPWeb)
              .Process();
        }
    }
} 

Overview of strategy

I want to drive as much testing as possible back into unit tests; the rest can go to integration testing. However, there is another issue here. As a team member, I actually want a record of the operational part of the tests, because this is about installation/deployment at the stage of provisioning/activation. So we’ll also need to write a BDD-style acceptance test to tease out the feature activation process. Thus:

  • the acceptance test will have the ability to actually activate a feature
  • the unit test should help specify any specifics of this activation – which is the abstraction mechanism we will use to get code abstractions
  • the integration tests will be any tests proving specific API calls

System Acceptance test

Before we try to drive out a design, let’s understand what needs to be done. Writing this acceptance test requires a good knowledge of SharePoint provisioning – so these are technically-focussed acceptance tests rather than business ones.

We will write a system test in Acceptance\Provisioning (we will implement this in StoryQ later on):

Story is Solution Deployment

In order to create new pages for user
As a user
I want a 'MySites' available

With scenario have a new feature
  Given I have a new wsp package mysites.wsp
  When site is deployed
    And I am on site http://mysites/personal
  Then Publishing Site Feature is site activated

Design

We now start to drive out some code concepts from tests. I think that our TODO list is something like this:

TODO

  • have an attribute
  • be able to read the attributes from a class
  • return a publisher for a page rather than a page
  • create a service for processing a publisher
  • actually publish pages in the context of SharePoint (integration)
  • check that it all works in SharePoint when deployed (system)
  • add new types of publishers into the factory class

Unit test: have an attribute

Because this is the first test, I will include the namespace to show that we are driving out domain code from within the unit test project. This code creates an attribute that represents a new page; it is straightforward code that lets us simply state the name, title and layout of each page we want provisioned.

using System.Linq;
using NUnit.Framework;
using YourCompany.SharePoint.Infrastructure.Provisioning;

namespace Test.Unit.Provisioning
{
    [TestFixture]
    public class SiteProvisioningAttributesTests
    {
        [Test]
        public void ProvisioningHasHomePage()
        {
            Assert.IsTrue(typeof(TestClass).GetMethod("OneNewPage").GetCustomAttributes(typeof(NewPageAttribute), false).Count() == 1);
        }

        [Test]
        public void CanReturnPageValues()
        {
            var page = ((IProvisioningAttribute)typeof(TestClass).GetMethod("OneNewPage").GetCustomAttributes(typeof(NewPageAttribute), false)).Page;
            Assert.AreEqual("Home.aspx", page.Name);
        }

        public class TestClass
        {
            [NewPage("Home.aspx", "Home Page", "HomePage.aspx")]
            public void OneNewPage()
            {

            }
        }
    }
} 

Now the skeleton code for the attribute – it will need to do provisioning later on, but let’s leave that for now. At this stage, we just make the attribute able to return the value object of a page:

using System;
using YourCompany.SharePoint.Domain.Model;
using YourCompany.SharePoint.Domain.Model.Provisioning;
using YourCompany.SharePoint.Domain.Services.Provisioning;

namespace YourCompany.SharePoint.Infrastructure.Provisioning
{
    [Serializable, AttributeUsage(AttributeTargets.Method, Inherited = false, AllowMultiple = true)]
    public class NewPageAttribute : Attribute, IProvisioningAttribute
    {
        public Page Page { get; private set; }
        public NewPageAttribute(string name, string title, string pageLayout)
        {
            Page = new Page(name, title, pageLayout);
        }
    }
} 
public struct Page
{
    public string Name { get; private set; }
    public string Title { get; private set; }
    public string PageLayout { get; private set; }
    
    public Page(string name, string title, string pageLayout)
        : this()
    {
        Name = name;
        Title = title;
        PageLayout = pageLayout;
    }
}
public interface IProvisioningAttribute
{
    Page Page { get; }
}

Unit test: be able to read the attributes from a class

TODO

  • have an attribute
  • be able to read the attributes from a class
  • return a publisher for a page rather than a page
  • create a service for processing a publisher
  • actually publish pages in the context of SharePoint (integration)
  • check that it all works in SharePoint when deployed (system)
  • add new types of publishers into the factory class

Now that we have an attribute that can return the values for a page, we need to create a parser over the class that can return the attributes.


[TestFixture]
public class SiteProvisioningAttributesTests
{
    [Test]
    public void NewProvisioningCountIsOneForNewPageMethod()
    {
        var pages = AttributeParser.Parse<NewPageAttribute>(typeof(TestClass), "OneNewPage");
        Assert.IsTrue(pages.Count() == 1);
    }

    [Test]
    public void NewProvisioningCountIsTwoForNewPageMethod()
    {
        var pages = AttributeParser.Parse<NewPageAttribute>(typeof(TestClass), "TwoNewPages");
        Assert.IsTrue(pages.Count() == 2);
    }

    public class TestClass
    {
        [NewPage("Home.aspx", "Home Page", "HomePage.aspx")]
        public void OneNewPage()
        {
        }

        [NewPage("Home.aspx", "Home Page", "HomePage.aspx")]
        [NewPage("Home2.aspx", "Home2 Page", "HomePage2.aspx")]
        public void TwoNewPages()
        {
        }
    }
}

With code something like this, we can get all the new pages:

public class AttributeParser
{
    public static IEnumerable<Page> Parse<T>(Type type, string method)
        where T : IProvisioningAttribute
    {
            return type.GetMethod(method).GetCustomAttributes(typeof(T), false)
                .Select(attribute => ((T)attribute).Page);
    }
} 

Now that we are returning a page by iterating over the method, I can see that we don’t want a page per se, but rather a specific type of publisher.

Unit tests: return a publisher for a page rather than a page

TODO

  • have an attribute
  • be able to read the attributes from a class
  • return a publisher for a page rather than a page
  • create a service for processing a publisher
  • actually publish pages in the context of SharePoint (integration)
  • check that it all works in SharePoint when deployed (system)
  • add new types of publishers into the factory class

Returning a publisher is going to be a refactor, as we are adding a new concept through the attribute. Let’s rewrite the test so that our parser instead returns an explicit IPagePublisher rather than an implicit Page.

[TestFixture]
public class SiteProvisioningAttributesTests
{
    [Test]
    public void NewProvisioningCountIsOneForNewPageMethod()
    {
        var pages = AttributeParser.Parse<IPagePublisher, NewPageAttribute>(typeof(TestClass), "OneNewPage");
        Assert.IsTrue(pages.Count() == 1);
    }

    public class TestClass
    {
        [NewPage("Home.aspx", "Home Page", "HomePage.aspx")]
        public void OneNewPage()
        {
        }
    }
}

As we unpick this small change, we get the introduction of a new interface, IPagePublisher. In practice I would add an empty interface, but here I want to show that we introduce the concept of publishing without actual implementations – this is the benefit we are looking for. In summary, I give the activator a “new page” with details, and it provides back to me a publisher that knows how to deal with this information. That sounds fair to me.

public interface IPagePublisher
{
    void Add();
    void Delete();
    void CheckIn();
    void CheckOut();
    void Publish();
    bool IsPublished();
    bool IsProvisioned();
} 

So now our AttributeParser becomes:

public class AttributeParser
{
    public static IEnumerable<T> Parse<T, T1>(Type type, string method)
        where T : IPagePublisher
        where T1 : IProvisioningAttribute
    {
        return type.GetMethod(method).GetCustomAttributes(typeof(T1), false)
            .Select(attribute => ((T1)attribute).Publisher(((T1)attribute).Page))
            .Cast<T>();
    }
} 

This requires a change to our interface, and the change may need a little explanation for some. Why the Func? Basically, it gives us a property that accepts parameters and whose implementation we can swap out as needed (for testing – an illustration of such a swap follows the concrete implementation below).

public interface IProvisioningAttribute
{
    Func<Page, IPagePublisher> Publisher { get; }
    Page Page { get; }
}

So the concrete implementation now becomes:

[Serializable, AttributeUsage(AttributeTargets.Method, Inherited = false, AllowMultiple = true)]
public class NewPageAttribute : Attribute, IProvisioningAttribute
{
    // hand back a factory rather than a publisher; this assumes a
    // PagePublisher(Page) constructor at this stage (the SPWeb context
    // is injected in the integration version later)
    public Func<Page, IPagePublisher> Publisher
    {
        get { return page => new PagePublisher(page); }
    }

    public Page Page { get; private set; }

    public NewPageAttribute(string name, string title, string pageLayout)
    {
        Page = new Page(name, title, pageLayout);
    }
}

Wow, all we did was add IPagePublisher to AttributeParser.Parse<IPagePublisher, NewPageAttribute>(typeof(TestClass), "OneNewPage"); and we got all that code!
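To illustrate the swap-out the Func enables, here is a hypothetical test double (FakeNewPageAttribute is not in the original code) whose Publisher hands back a Moq stub instead of the real PagePublisher, so the parser can be exercised without going anywhere near SharePoint:

using System;
using Moq;

// Hypothetical test double: same contract as NewPageAttribute, but the
// Publisher implementation is swapped for one that returns a do-nothing mock.
[Serializable, AttributeUsage(AttributeTargets.Method, Inherited = false, AllowMultiple = true)]
public class FakeNewPageAttribute : Attribute, IProvisioningAttribute
{
    public Page Page { get; private set; }

    public FakeNewPageAttribute(string name, string title, string pageLayout)
    {
        Page = new Page(name, title, pageLayout);
    }

    // Swapped implementation: no SharePoint, just a stub publisher.
    public Func<Page, IPagePublisher> Publisher
    {
        get { return page => new Mock<IPagePublisher>().Object; }
    }
}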

Unit tests: create a service for processing a publisher

TODO

  • have an attribute
  • be able to read the attributes from a class
  • return a publisher for a page rather than a page
  • create a service for processing a publisher
  • actually publish pages in the context of SharePoint (integration)
  • check that it all works in SharePoint when deployed (system)
  • add new types of publishers into the factory class

So far, we are able to mark a method with attributes for each page. When we parse these attributes, we get back the publishers with the page. Now we need to be able to process a publisher. We are going to move into mocking out a publisher to check that the publisher is called in the service. This service is a provisioner, and it will be provisioning the personal site, so hopefully calling it the PersonalSiteProvisioner makes sense.

Let’s look at the code below. We create a new personal site provisioner, and inside this we want to ensure that the page publisher actually adds a page. Note that we are deferring how the page is actually added; we just want to know that Add is called. (Note: we are using Moq as the isolation framework.)

using Moq;

[TestFixture]
public class SiteProvisioningTest
{
    [Test]
    public void CanAddPage()
    {
        var page = new Mock<IPagePublisher>();
        var provisioner = new PersonalSiteProvisioner(page.Object);

        provisioner.Process();

        page.Verify(x => x.Add());
    }
 }

So here’s the code to satisfy the test:

public interface IProvisioner
{
    void Process();
}

With the implementation:

namespace YourCompany.SharePoint.Domain.Services.Provisioning
{
    public class PersonalSiteProvisioner : IProvisioner
    {
        public List<IPagePublisher> PagePublishers { get; private set; }

          public PersonalSiteProvisioner(List<IPagePublisher> publishers)
          {
              PagePublishers = publishers;
          }
          public PersonalSiteProvisioner(IPagePublisher publisher) 
            : this(new List<IPagePublisher>{publisher})
          {
          }

        public void Process()
        {
            PagePublishers.TryForEach(x => x.Add());
        }
    }
} 

Right, that all looks good. We can Process some publishers in our provisioner. Here’s some of the beauty (IMHO) that we can add to these tests without going near an integration point. Let’s add a few more tests: adding multiple pages, adding and deleting, ensuring that there is error handling, trying different combinations of adding and deleting, and finally checking that if one add errors we can still process the others in the list. Take a look at the tests (P.S. the implementation in the end was easy once we had the tests, but it took an hour or so).

Unit tests: really writing the provisioner across most cases

TODO

  • have an attribute
  • be able to read the attributes from a class
  • return a publisher for a page rather than a page
  • create a service for processing a publisher
  • actually publish pages in the context of SharePoint (integration)
  • check that it all works in SharePoint when deployed (system)
  • add new types of publishers into the factory class

[TestFixture]
public class SiteProvisioningTest
{

    [Test]
    public void CanAddMultiplePages()
    {
        var page = new Mock<IPagePublisher>();
        var provisioner = new PersonalSiteProvisioner(new List<IPagePublisher> { page.Object, page.Object });

        provisioner.Process();

        page.Verify(x => x.Add(), Times.Exactly(2));
    }

    [Test]
    public void CanRemovePage()
    {
        var page = new Mock<IPagePublisher>();
        page.Setup(x => x.IsPublished()).Returns(true);

        var provisioner = new PersonalSiteProvisioner(page.Object);

        provisioner.Process();

        page.Verify(x => x.Delete());
    }

    [Test]
    public void RemovingPageThatDoesntExistDegradesNicely()
    {
        var page = new Mock<IPagePublisher>();
        page.Setup(x => x.IsPublished()).Returns(true);
        page.Setup(x => x.Delete()).Throws(new Exception());

        var provisioner = new PersonalSiteProvisioner(page.Object);

        provisioner.Process();

        page.Verify(x => x.Delete());
    }

    [Test]
    [Sequential]
    public void SecondItemIsProcessedWhenFirstItemThrowsException(
        [Values(false, true)] bool delete,
        [Values(true, false)] bool add)
    {
        var page = new Mock<IPagePublisher>();
        page.Setup(x => x.IsPublished()).ReturnsInOrder(
            () => delete,
            () => add);
        page.Setup(x => x.Delete()).Callback(() => { throw new Exception(); });

        var provisioner = new PersonalSiteProvisioner(new List<IPagePublisher> { page.Object, page.Object, page.Object });
        provisioner.Process();

        page.Verify(x => x.Add(), Times.Once());
        page.Verify(x => x.Delete(), Times.Once());
    }

    [Test]
    public void AlreadyInstalledPagesWontReinstallCatchesException()
    {
        var page = new Mock<IPagePublisher>();
        page.Setup(x => x.Add()).Throws(new Exception());

        var provisioner = new PersonalSiteProvisioner(page.Object);
        provisioner.Process();

        page.Verify(x => x.Add(), Times.Once());
    }
}
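Note that ReturnsInOrder is not part of Moq’s core API; it is a small community extension method that circulated at the time. A minimal sketch of what it might look like:

using System;
using System.Collections.Generic;
using Moq.Language.Flow;

public static class MoqExtensions
{
    // Each call to the mocked member dequeues and evaluates the next value
    // factory, so successive calls can return different values.
    public static void ReturnsInOrder<TMock, TResult>(
        this ISetup<TMock, TResult> setup,
        params Func<TResult>[] valueFunctions) where TMock : class
    {
        var queue = new Queue<Func<TResult>>(valueFunctions);
        setup.Returns(() => queue.Dequeue()());
    }
}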

The implementation is below. A couple of notes: this class should have logging in it (this is the key point in the system that we need in the logs), so it should also be interaction tested; and there is a helper, TryForEach, a simple wrapper around ForEach that null-checks and carries on past per-item exceptions (a sketch of it follows the implementation). Believe it or not, the tests above drove out a lot of errors even in this small piece of code, because it has to deal with list processing. We now don’t have to deal with these issues at integration time (and particularly not in production).

namespace YourCompany.SharePoint.Domain.Services.Provisioning
{
    public class PersonalSiteProvisioner : IProvisioner
    {
        public List<IPagePublisher> PagePublishers { get; private set; }

        public PersonalSiteProvisioner(List<IPagePublisher> publishers)
        {
            PagePublishers = publishers;
        }
        public PersonalSiteProvisioner(IPagePublisher publisher) 
          : this(new List<IPagePublisher>{publisher})
        {
        }

        public void Process()
        {
            PagePublishers.TryForEach(x =>
            {
                if (x.IsPublished()) {
                    x.Delete();
                } else {
                    x.Add();
                }
            });
        }
    }
} 
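TryForEach is not shown in the original post; here is a minimal sketch consistent with the tests above – it null-checks the list and swallows per-item exceptions so that one failing publisher does not stop the rest:

using System;
using System.Collections.Generic;

public static class ListExtensions
{
    public static void TryForEach<T>(this IEnumerable<T> items, Action<T> action)
    {
        if (items == null) return;   // nothing to process

        foreach (var item in items)
        {
            try
            {
                action(item);
            }
            catch (Exception)
            {
                // swallow so the remaining items are still processed
                // (the real implementation should log here)
            }
        }
    }
}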

Now we are ready to do something with SharePoint.

Integration tests: publish pages in the context of SharePoint

TODO

  • have an attribute
  • be able to read the attributes from a class
  • return a publisher for a page rather than a page
  • create a service for processing a publisher
  • actually publish pages in the context of SharePoint (integration)
  • check that it all works in SharePoint when deployed (system)
  • add new types of publishers into the factory class

Now we have to deal with SharePoint, which means an integration test. In terms of the classes, we are now ready to implement the PagePublisher. Let’s write a test. The basic design is that we hand in the publishing web context (and page) and then add it. (Note: we could have handed in only the web context in the constructor and then the page dependency in the method – this looks better in hindsight.) The code then asserts that the item exists. Now, those of you familiar with the SharePoint API will know that neither the Site.OpenPublishingWeb nor SiteExpectations.ExpectListItem calls exist. These are wrappers to make the code more readable. We’ll include them below.

What you need to see is that there are no references to SharePoint in the Domain code – the flip-side being that the Infrastructure code is the place where these live.

using YourCompany.SharePoint.Domain.Model.Provisioning;
using YourCompany.SharePoint.Infrastructure;
using Microsoft.SharePoint.Publishing;
using NUnit.Framework;
using Test.Unit;

namespace Test.Integration.Provisioning
{
    [TestFixture]
    public class AddPagePublisherTest
    {
        [Test]
        public void CanAddPage()
        {
            var page = new Page("Home.aspx", "Home Page", "HomePage.aspx");   
            Site.OpenWeb("http://mysites/", web => new PagePublisher(web, page).Add());
            SiteExpectations.ExpectListItem("http://mysites/", x => x.Name == "Home.aspx");
        }
    }
}

Additional wrapper code – one piece in Infrastructure and the other in the Test.Integration project – both hide the repetitive complexity of SharePoint.

namespace YourCompany.SharePoint.Infrastructure
{
    public static class Site
    {
        public static void OpenWeb(string url, Action<SPWeb> action)
        {
            using (var site = new SPSite(url))
            using (var web = site.OpenWeb())
            {
                action(web);
            }
        }
        public static void OpenPublishingWeb(string url, Action<PublishingWeb> action)
        {
            using (var site = new SPSite(url))
            using (var web = site.OpenWeb())
            {
                action(PublishingWeb.GetPublishingWeb(web));
            }
        }
    }
}

And the test assertion:

namespace Test.Integration
{
   public class SiteExpectations
   {
       public static void ExpectListItem(string url, Func<SPListItem, bool> action)
       {
           Site.OpenPublishingWeb(url, x => Assert.IsTrue(x.PagesList.Items.Cast<SPListItem>().Where(action).Any()));
       }
   }
}

So now let’s look at (some of) the code that implements Add. Sorry, it’s a bit long but should be easy to read. Our page publisher code doesn’t look a lot different from the original code we found in the blog Customising SharePoint 2010 MySites with Publishing Sites in the SetupMySite code. It is perhaps a little clearer because there are a few extra private methods that help describe the process SharePoint requires when creating a new page. But the other code would have got there eventually.

The key difference is how we got there. The correctness of this code was confirmed by the integration test. In fact, we had to run this code tens (or perhaps a hundred) of times to iron out the idiosyncrasies of the SharePoint API – particularly the case around DoUnsafeUpdates. Yet we didn’t have to deploy the entire solution each time, and in no way did we have to debug by attaching to a process. There were some times we were at a loss and did resort to the debugger, but we were able to get context by debugging in the test runner. All of this has led to increases in speed and stability.

There’s a final win in design. This code has one responsibility: to add a page. No more, no less. When we come to add multiple pages we don’t change this code. If we come to add another type of page – perhaps we don’t change this code either but create another type of publisher. Later on we can work out whether these publishers have shared, base code. A decision we can defer until refactoring.

using System;
using System.Linq;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Publishing;

namespace YourCompany.SharePoint.Infrastructure.Provisioning
{
    public class PagePublisher : IPagePublisher
    {
        public const string SiteCreated = "MySiteCreated";

        private readonly Page _page;
        private readonly SPWeb _context;
        private PublishingWeb _pubWeb;

        public PagePublisher(SPWeb web, Page publishedPage)
        {
            _page = publishedPage;
            _context = web;
        }

        public void Add()
        {
            // resolve the publishing web from the SPWeb context handed in
            _pubWeb = PublishingWeb.GetPublishingWeb(_context);

            DisableVersioning();

            if (!HomePageExists)
                CreatePage();

            AddAsDefault();
            Commit();

            // mark the site as created so the feature is not re-provisioned
            if (!IsProvisioned())
                DoUnsafeUpdates(x =>
                {
                    x.Properties.Add(SiteCreated, "true");
                    x.Properties.Update();
                });
        }

        public bool IsProvisioned()
        {
            return _context.Properties.ContainsKey(SiteCreated);
        }

        private void AddAsDefault()
        {
            _pubWeb.SetDefaultPage(HomePage.ListItem.File);
        }

        private void Commit()
        {
            _pubWeb.Update();
        }

        private bool HomePageExists
        {
            get { return _pubWeb.HasPublishingPage(_page.Name);  }
        }

        private void DoUnsafeUpdates(Action<SPWeb> action)
        {
            var currentState = _context.AllowUnsafeUpdates;
            _context.AllowUnsafeUpdates = true;
            action(_context);
            _context.AllowUnsafeUpdates = currentState;
        }

        private void DisableVersioning()
        {
            var pages = _pubWeb.PagesList;
            pages.EnableVersioning = false;
            pages.ForceCheckout = false;
            pages.Update();
        }

        private void CreatePage()
        {
            var layout = _pubWeb.GetAvailablePageLayouts()
                .Where(p => p.Name == _page.PageLayout)
                .SingleOrDefault();

            var page = _pubWeb.GetPublishingPages().Add(_page.Name, layout);
            page.Title = _page.Title;
            page.Update();
        }
    }
}

You should note that our tests against SharePoint are actually small and neat. In this case, we test with one dependency (SharePoint) and one interaction (Add). The tests should be that simple. Setup and teardown are where it gets a little harder. Below requires a SetUp which swaps out the original page so that the new one can be added and we know it is different. Teardown cleans up the temp publishing page. Note: at this stage the code in the SetUp has not been abstracted into its own helper – we could do this later.

namespace Test.Integration.Provisioning
{
    [TestFixture]
    public class AddPagePublisherTest
    {
        private Page _publishedPage;
        private const string HomeAspx = "Home.aspx";
        readonly string _homePage = a.TestUser.HomePage;
        private PublishingPage _tempPublishingPage;

        [SetUp]
        public void SetUp()
        {
            _publishedPage = new Page("Home.aspx", "Home Page", "HomePage.aspx");

            Site.OpenPublishingWeb(_homePage, x =>
            {
                var homePageLayout = x.GetPublishingPageLayout(_publishedPage.PageLayout);
                _tempPublishingPage = x.AddPublishingPage("TempPage.aspx", homePageLayout);
                x.SetDefaultPage(_tempPublishingPage.ListItem.File);
                x.Update();
                x.DeletePublishingPage(_publishedPage.Name);
            });
        }

        [Test]
        public void CanAddPage()
        {
            Site.OpenWeb(_homePage, web => new PagePublisher(web, _publishedPage).Add());
            SiteExpectations.ExpectListItem(_homePage, x => x.Name == HomeAspx);
        }

        [TearDown]
        public void Teardown()
        {
            Site.OpenPublishingWeb(_homePage, x => x.DeletePublishingPage(_tempPublishingPage.Name));
        }
    }
} 
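A note on the helpers used above and inside PagePublisher: HasPublishingPage, GetPublishingPageLayout, AddPublishingPage, DeletePublishingPage and SetDefaultPage are not part of the PublishingWeb API – they are small extension methods in our wrapper code. A plausible sketch, assuming the standard publishing API underneath:

using System.Linq;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Publishing;

public static class PublishingWebExtensions
{
    public static bool HasPublishingPage(this PublishingWeb web, string name)
    {
        return web.GetPublishingPages().Cast<PublishingPage>().Any(p => p.Name == name);
    }

    public static PageLayout GetPublishingPageLayout(this PublishingWeb web, string name)
    {
        return web.GetAvailablePageLayouts().SingleOrDefault(p => p.Name == name);
    }

    public static PublishingPage AddPublishingPage(this PublishingWeb web, string name, PageLayout layout)
    {
        var page = web.GetPublishingPages().Add(name, layout);
        page.Update();
        return page;
    }

    public static void DeletePublishingPage(this PublishingWeb web, string name)
    {
        var page = web.GetPublishingPages().Cast<PublishingPage>().FirstOrDefault(p => p.Name == name);
        if (page != null)
            page.ListItem.Delete();   // remove the backing list item
    }

    public static void SetDefaultPage(this PublishingWeb web, SPFile file)
    {
        web.DefaultPage = file;       // thin readability wrapper
    }
}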

System: check that it all works in SharePoint when deployed

TODO

  • have an attribute
  • be able to read the attributes from a class
  • return a publisher for a page rather than a page
  • create a service for processing a publisher
  • actually publish pages in the context of SharePoint (integration)
  • check that it all works in SharePoint when deployed (system)
  • add new types of publishers into the factory class

The final step in this cadence is to create system tests by completing the acceptance tests. Theoretically this step should not be needed, because the code already talks to SharePoint. In practice, this step finds problems and cleans up code at the same time. Let’s return to the test that we originally wrote and have been delivering to – but not coding against. We are now going to implement the system test using test-last development. Here is the story:

Story is Solution Deployment

In order to create new pages for user
As a user
I want a MySites available

With scenario have a new feature
  Given I have a new wsp package mysites.wsp
  When site is deployed
    And I am on site http://mysites/personal/45678
  Then Publishing Site Feature F6924D36-2FA8-4f0b-B16D-06B7250180FA is site activated

You may notice that in the story above we have actually added some more information to the original. We now know that the personal site has an Id in it (http://mysites/personal/45678) and we also know the GUID of the feature (F6924D36-2FA8-4f0b-B16D-06B7250180FA). Adding, changing and morphing system tests often happens, particularly as we learn more about our implementation – in our case, we have to run the analysts/product owner through these changes!

Now we need to find a library to provide an implementation. There are a number of libraries: Cucumber, nBehave or StoryQ. We have chosen StoryQ. StoryQ has a GUI converter that can take the text input above and turn it into the C# skeleton code below, which we then fill out (Cucumber, by contrast, keeps the story separate from the implementation). StoryQ then outputs the results of the tests in the test runner.

  using System;
  using StoryQ;
  using NUnit.Framework;

  namespace Test.System.Acceptance.Provisioning
  {
      [TestFixture]
      public class SolutionDeploymentTest
      {
          [Test]
          public void SolutionDeployment()
          {
              new Story("Solution Deployment")
                  .InOrderTo("create new pages for users")
                  .AsA("user")
                  .IWant("a MySites available")

                  .WithScenario("have a new feature")
                    .Given(IHaveANewWspPackage_, "mysites.wsp")
                    .When(SiteIsDeployed)
                      .And(IAmOnSite_, "http://mysites/personal/684945")
                    .Then(__IsSiteActivated, "Publishing Site Feature", "F6924D36-2FA8-4f0b-B16D-06B7250180FA")
                 
                  .Execute();
          }

          private void IHaveANewWspPackage_(string wsp)
          {
              throw new NotImplementedException();
          }

          private void SiteIsDeployed()
          {
              throw new NotImplementedException();
          }

          private void IAmOnSite_(string site)
          {
              throw new NotImplementedException();
          }    

          private void __IsSiteActivated(string title, string guid)
          {
              throw new NotImplementedException();
          }
      }
  }

Our implementation of the system test is surprisingly simple. Because we are provisioning new features, the main test is that the deployment sequence has worked. So we are going to make two checks (once the code is deployed): is the solution deployed? Is the feature activated? To do this we write a couple of abstractions around the SharePoint API: Solution.IsDeployed and Feature.IsSiteActivated. We do this because we want the system tests to remain extremely clean.

  using StoryQ;
  using NUnit.Framework;

  namespace Test.System.Acceptance.Provisioning
  {
      [TestFixture]
      public class SolutionDeploymentTest
      {
          [Test]
          public void SolutionDeployment()
          {
              new Story("Solution Deployment")
                  .InOrderTo("create new pages for users")
                  .AsA("user")
                  .IWant("a MySites available")

                  .WithScenario("have a new feature")
                    .Given(IHaveANewWspPackage_, "mysites.wsp")
                    .When(SiteIsDeployed)
                      .And(IAmOnSite_, "http://mysites/personal/684945")
                    .Then(__IsSiteActivated, "Publishing Site Feature", "F6924D36-2FA8-4f0b-B16D-06B7250180FA")
                 
                  .Execute();
          }

          private string _site;
          private string _wsp;

          private void IHaveANewWspPackage_(string wsp)
          {
              _wsp = wsp;
          }

          private void SiteIsDeployed()
          {
              Assert.IsTrue(Solution.IsDeployed(_wsp));
          }

          private void IAmOnSite_(string site)
          {
              _site = site;
          }    

          private void __IsSiteActivated(string title, string guid)
          {
              Assert.IsTrue(Feature.IsSiteActivated(_site, guid));
          }
      }
  }

Here are the wrappers around the SharePoint API that we are going to reuse. Sometimes we wrap the SharePoint API directly, as with SPFarm:

public static class Solution
{
    public static bool IsDeployed(string wsp)
    {
        return SPFarm.Local.Solutions
            .Where(x => x.Name == wsp)
            .Any();
    }
} 

Sometimes we wrap our own wrappers, as with Site.Open (a sketch of it follows the code below):

public static class Feature
{
    public static bool IsSiteActivated(string webUrl, string feature)
    {
        return Site.Open(webUrl, site => 
             site.Features
                .Where(x =>
                    x.Definition.Id == new Guid(feature) &&
                    x.Definition.Status == SPObjectStatus.Online).Any()
             );
    }
} 
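Site.Open is not shown in the original post; a minimal sketch, assuming it is simply a Func-returning twin of the Site.OpenWeb helper from earlier:

using System;
using Microsoft.SharePoint;

public static class Site
{
    // Open the web, evaluate the query, dispose, and hand back the result.
    public static T Open<T>(string url, Func<SPWeb, T> query)
    {
        using (var site = new SPSite(url))
        using (var web = site.OpenWeb())
        {
            return query(web);
        }
    }
}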

Starting to wrap up the cadence for adding a feature

Below is the TODO list that we started with as we layered our code with a test automation pyramid strategy; we are now up to adding new publishers.

  • have an attribute
  • be able to read the attributes from a class
  • return a publisher for a page rather than a page
  • create a service for processing a publisher
  • actually publish pages in the context of SharePoint (integration)
  • check that it all works in SharePoint when deployed (system)
  • add new types of publishers into the factory class

So far we have only implemented NewPage and still have ActivateFeature, RemovePage and MasterPage to go:

public class SiteFeatureEventReceiver : SPFeatureReceiver
{
    [NewPage("Home.aspx", "Home Page", "HomePage.aspx")]
    [ActivateFeature(PackageCfg.PublishingWebFeature)]
    [RemovePage("Pages/default.aspx")]
    [MasterPage("CustomV4.master", MasterPage.MasterPageType.User)]
    [MasterPage("CustomMySite.master", MasterPage.MasterPageType.Host)]
    public override void FeatureActivated(SPFeatureReceiverProperties properties)
    {
        PersonalSiteProvisioner
          .Create(properties.Feature.Parent as SPWeb)
          .Process();
    }
}

To bring in the new pages, it would be a mistake to start by implementing the attributes. Instead, we need to write a system test – this may merely be an extension of the current system test – and then proceed through unit and integration tests. What we’ll find as we add a new page is that we are going to need a new concept: a factory of some sort that returns our publishers for each provisioning attribute (a speculative sketch follows below).
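As a rough illustration only (PublisherFactory and CreateFor are names invented here, not from the original code), such a factory might parse every provisioning attribute on the receiver method and hand back the corresponding publishers:

using System;
using System.Collections.Generic;
using System.Linq;

// Speculative sketch of the factory hinted at above.
public static class PublisherFactory
{
    public static IEnumerable<IPagePublisher> CreateFor(Type receiver, string method)
    {
        // GetCustomAttributes with an interface type returns every attribute
        // implementing IProvisioningAttribute: NewPage, ActivateFeature, etc.
        return receiver.GetMethod(method)
            .GetCustomAttributes(typeof(IProvisioningAttribute), false)
            .Cast<IProvisioningAttribute>()
            .Select(attribute => attribute.Publisher(attribute.Page));
    }
}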

So let me finish with the new acceptance test for [ActivateFeature(PackageCfg.PublishingWebFeature)] and a TODO list.

Story is Solution Deployment

In order to create new pages for user
As a user
I want a MySites available

With scenario have a new feature
  Given I have a new wsp package mysites.wsp
  When site is deployed
    And I am on site http://mysites/personal/45678
  Then Publishing Site Feature F6924D36-2FA8-4f0b-B16D-06B7250180FA is site activated
    And Publishing Feature is web activated

And the TODO now becomes:

TODO

  • have a new attribute ActivateFeature (unit)
  • be able to read the attribute (unit)
  • return a publisher for a page rather than a page ActivateFeaturePublisher? (unit)
  • extend service for processing a publisher
  • actually activate feature in the context of SharePoint (integration)
  • complete acceptance test (system)

Convinced?

This is a lot of work – layering tends to do this. But it provides the basis for scaling a code base. Quite quickly we have found that there are patterns for reuse, that we pick up SharePoint API problems in integration tests rather than post-deployment in system tests, that we can move around the code base easily-ish, and that we can refactor because the code is under test. All of this gives us the possibility of scale benefits in terms of speed and quality. A system that is easy to code tends not to provide these scale benefits.


Test Strategy in SharePoint: Part 3 – Event Receiver as procedural, untestable feature

December 5th, 2010

Here I cover procedural, untestable code, and to do so I will cover three pieces of SharePoint code. The first is sample code that suggests just how easy it is to write SharePoint code. The second is production-quality code written in the style of the first, and yet it becomes unmaintainable. The third piece of code is what we refactored the second to become, and we think it’s maintainable. But it comes at a cost (and benefit) that I explore in “Test Strategy in SharePoint: Part 4 – Event Receiver as layered Feature”.

Sample One: sample code is easy code

Here’s a current sample that you will find on the web. We find it is actually the kind of code that makes its way into production rather than remaining sample code – so no offence to the author. Sharemuch writes:

When building publishing site using SharePoint 2010 it’s quite common to have few web parts that will make it to every page (or close to every page) on the site. An example could be a custom secondary navigation which you may choose to make a web part to allow some user configuration. This means you need to provision such web part on each and every page that requires it – right? Well, there is another solution. What you can do is to define your web part in a page layout module just like you would in a page. In MOSS this trick would ensure your web part will make it to every page that inherits your custom layout; not so in SharePoint 2010. One solution to that is to define the web part in the page layout, and programmatically copy web parts from page layout to pages inheriting them. In my case I will demonstrate how to achieve this by a feature receiver inside a feature that will be activate in site template during site creation. This way every time the site is created and pages are provisioned – my feature receiver will copy web parts from page layout to those newly created pages.

	public override void FeatureActivated(SPFeatureReceiverProperties properties)
	{
	    SPWeb web = properties.Feature.Parent as SPWeb;

	    if (null != web)
	    {
	        PublishingWeb pubWeb = PublishingWeb.GetPublishingWeb(web);
	        SPList pages = pubWeb.PagesList;

	        foreach (SPListItem page in pages.Items)
	        {
	            PublishingPage pubPage = PublishingPage.GetPublishingPage(page);
	            pubPage.CheckOut();
	            CopyWebParts(pubPage.Url, web, pubPage.Layout.ServerRelativeUrl, pubPage.Layout.ListItem.Web);
	            pubPage.CheckIn("Webparts copied from page layout");
	        }
	    }
	}

	private void CopyWebParts(string pageUrl, SPWeb pageWeb, string pageLayoutUrl, SPWeb pageLayoutWeb)
	{
	    SPWeb web = null;
	    SPWeb web2 = null;
	    SPLimitedWebPartManager pageWebPartManager = pageWeb.GetLimitedWebPartManager(pageUrl, PersonalizationScope.Shared);
	    SPLimitedWebPartManager pageLayoutWebPartManager = pageLayoutWeb.GetLimitedWebPartManager(pageLayoutUrl, PersonalizationScope.Shared);
	    web2 = pageWebPartManager.Web;
	    web = pageLayoutWebPartManager.Web;
	    SPLimitedWebPartCollection webParts = pageLayoutWebPartManager.WebParts;
	    SPLimitedWebPartCollection parts2 = pageWebPartManager.WebParts;
	    foreach (System.Web.UI.WebControls.WebParts.WebPart part in webParts)
	    {
	        if (!part.IsClosed)
	        {
	            System.Web.UI.WebControls.WebParts.WebPart webPart = parts2[part.ID];
	            if (webPart == null)
	            {
	                string zoneID = pageLayoutWebPartManager.GetZoneID(part);
	                pageWebPartManager.AddWebPart(part, zoneID, part.ZoneIndex);
	            }
	        }
	    }
	}

This sample code sets a tone that this is maintainable code. For example, there is some abstraction, with the CopyWebParts method remaining separate from the activation code. Yet if I put it against the four elements of simple design, the private method maximises clarity but won’t get past the first element, passing tests.

Let’s take a look at some production-quality code that I encountered and then refactored to make maintainable.

Sample Two: easy code goes production

All things dev puts up the sample for Customising SharePoint 2010 MySites with Publishing Sites. The code below follows the same patterns: clarity is created through class-scope refactoring into private methods. But we still see magic string constants, local error handling, and procedural-style coding against the SharePoint API (in SetUpMySite()). The result is code that is easy to write, easy to deploy, manual to test, and whose reuse is through block-copy inheritance (ie copy and paste).

using System;
using System.Runtime.InteropServices;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Publishing;
using System.Linq;

namespace YourCompany.SharePoint.Sites.MySites
{

   [Guid("cd93e644-553f-4486-91ad-86e428c89723")]
   public class MySitesProvisionerReceiver : SPFeatureReceiver
   {

         private const string MySiteCreated = "MySiteCreated";
         private const string ResourceFile = "MySiteResources";
         private const uint _lang = 1033;

         public override void FeatureActivated(SPFeatureReceiverProperties properties)
         {
            using (SPWeb web = properties.Feature.Parent as SPWeb)
            {
               //run only if MySite hasn't been created yet as feature could be run after provisioning as well
               if (web.Properties.ContainsKey(MySiteCreated))
                  return;

               ActivatePublishingFeature(web);
               SetUpMySite(web);
            }
         }

         private void ActivatePublishingFeature(SPWeb web)
         {
            //Activate Publishing Web feature as the stapler seems to not do this consistently
            try
            {
               web.Features.Add(new Guid("94C94CA6-B32F-4da9-A9E3-1F3D343D7ECB"));
            }
            catch (Exception)
            {
               //already activated
            }
         }

         private void SetUpMySite(SPWeb web)
         {
            //turn off versioning, optional but keeps it easier for users as they are the only users of their MySite home page
            var pubWeb = PublishingWeb.GetPublishingWeb(web);
            var pages = pubWeb.PagesList;
            pages.EnableVersioning = false;
            pages.ForceCheckout = false;
            pages.Update();

            //set custom masterpage
            var customMasterPageUrl = string.Format("{0}/_catalogs/masterpage/CustomV4.master", web.ServerRelativeUrl);
            web.CustomMasterUrl = customMasterPageUrl;
            web.MasterUrl = customMasterPageUrl;

            var layout = pubWeb.GetAvailablePageLayouts().Cast<PageLayout>()
                               .Where(p => p.Name == "HomePage.aspx")
                               .SingleOrDefault();

            //set default page
            var homePage = pubWeb.GetPublishingPages().Add("Home.aspx", layout);
            homePage.Title = "Home Page";
            homePage.Update();
            pubWeb.DefaultPage = homePage.ListItem.File;

            //Add initial webparts
            WebPartHelper.WebPartManager(web,
               homePage.ListItem.File.ServerRelativeUrl,
               Resources.Get(ResourceFile, "MySiteSettingsListName", _lang),
               Resources.Get(ResourceFile, "InitialWebPartsFileName", _lang));

            web.AllowUnsafeUpdates = true;
            web.Properties.Add(MySiteCreated, "true");
            web.Properties.Update();
            pubWeb.Update();

            //set the search centre url
            web.AllProperties["SRCH_ENH_FTR_URL"] = Resources.Get(ResourceFile, "SearchCentreUrl", _lang);
            web.Update();

            //delete default page
            var defaultPageFile = web.GetFile("Pages/default.aspx");
            defaultPageFile.Delete();
            web.AllowUnsafeUpdates = false;
         }
   }
}

There is, for me, one more key issue: what does it really do? I was struck by the unreadability of this code and concerned by how many working parts there are and how they would all be combined.

Sample Three: wouldn’t this be nice?

Here’s what we refactored that code to. Hopefully there is some more intention in this. You may read it like this: I have a Personal Site that I create and process with a new page, removing an existing page, activating a feature and then setting a couple of master pages.

I like this because I can immediately ask simple questions: why do I have to remove an existing page, and why are there two master pages? It's SharePoint, so of course there are good reasons. But I am now abstracting away what SharePoint has to do for me to get this feature activated. It's not perfect but it is a good enough example to work on.

using System.Runtime.InteropServices;
using YourCompany.SharePoint.Domain.Model.Provisioning;
using YourCompany.SharePoint.Infrastructure;
using YourCompany.SharePoint.Infrastructure.Configuration;
using Microsoft.Office.Server.UserProfiles;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Publishing;

namespace YourCompany.SharePoint.MySites.Features.WebFeatureProvisioner
{
    [Guid("cb9c03cd-6349-4a1c-8872-1b5032932a04")]
    public class SiteFeatureEventReceiver : SPFeatureReceiver
    {
        [NewPage("Home.aspx", "Home Page", "HomePage.aspx")]
        [ActivateFeature(PackageCfg.PublishingWebFeature)]
        [RemovePage("Pages/default.aspx")]
        [MasterPage("CustomV4.master", MasterPage.MasterPageType.User)]
        [MasterPage("CustomMySite.master", MasterPage.MasterPageType.Host)]
        public override void FeatureActivated(SPFeatureReceiverProperties properties)
        {
            PersonalSiteProvisionerFactory
              .Create(properties.Feature.Parent as SPWeb)
              .Process();
        }
    }
}	

What I want to suggest is that this code is not necessarily easy to write, given the previous solution. We are going to have to bake our own classes around the code: there's the factory class and the attributes, and we'll also need the classes that the factory returns. In the end, we should have testable code and my hunch is that we are likely to get some reuse too.
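
As a rough sketch of what that baking might involve – the shapes here are my guesses ahead of the next post, not the finished classes – each attribute is a small data holder and the factory can reflect over FeatureActivated to turn the declarations into tasks:

using System;
using System.Linq;
using System.Reflection;

[AttributeUsage(AttributeTargets.Method, AllowMultiple = true)]
public class NewPageAttribute : Attribute
{
    public string Url { get; private set; }
    public string Title { get; private set; }
    public string Layout { get; private set; }

    public NewPageAttribute(string url, string title, string layout)
    {
        Url = url;
        Title = title;
        Layout = layout;
    }
}

public static class ProvisioningAttributeReader
{
    // The factory can read the declared pages off the receiver method and
    // hand each one to a provisioning task that Process() later runs.
    public static NewPageAttribute[] PagesFor(MethodBase method)
    {
        return method.GetCustomAttributes(typeof(NewPageAttribute), false)
                     .Cast<NewPageAttribute>()
                     .ToArray();
    }
}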

The next entry, “Test Strategy in SharePoint: Part 4 – Event Receiver as layered Feature”, will look at how we can layer the code to make this a reality.


Test Strategy in SharePoint: Part 2 – good layering to aid testability

November 28th, 2010 No comments


Overall goal: write maintainable code over code that is easy to write (Freeman and Price, 2010)

In Part 1 – testing poor layering is not good TDD, I argued that we need to find better ways to think about testing SharePoint wiring code that do not confuse unit and integration tests. In this post, I outline a layering strategy for solutions that resolves this problem. Rather than only one project for code and one for tests, I use 3 projects for tests and 4 for the code – a strategy based on DDD layering and the test automation pyramid.

  • DDD layering projects: Domain, Infrastructure, Application and UI
  • Test projects: System, Integration and Unit

Note: this entry does not give code samples – the next post will – but focuses on how the projects are organised within the Visual Studio solution and how they are sequenced when programming. I’ve included a task sheet that we use in Sprint Planning as a boilerplate list to mix and match the scope of features. Finally, I have a general rave on the need for disciplined test-first and test-last development.

Projects

Here’s the quick overview of the layers, take a look further down for fuller overview.

  • Domain: the project which has representations of the application domain and has no references to other libraries (particularly SharePoint)
  • Infrastructure: this project references Domain and has the technology-specific implementations. In this case, it has all the SharePoint API implementations
  • Application: this project is a very light orchestration layer. It is a way to get logic out of the UI layer to make it testable. Currently, we actually put all our javascript jQuery widgets in this project (which I will post about later, because we unit test (BDD-style) all our javascript and thus need to keep it away from the UI)
  • UI: this is the wiring code for SharePoint but has little else – this will make more sense once you can see that we integration test all SharePoint API code (which goes in Infrastructure) and that we unit test any models, services or validation (which go in Domain). For example, with Event Receivers the code in methods is rarely longer than a line or two – see the sketch after this list
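
For illustration – the service name here is mine, not from our code base – a thin receiver in the UI project looks something like this:

using Microsoft.SharePoint;

// Domain service, unit tested elsewhere; the receiver below is pure wiring.
public class DocumentService
{
    public void Process(int listItemId)
    {
        // domain logic lives here, away from the SharePoint plumbing
    }
}

public class DocumentEventReceiver : SPItemEventReceiver
{
    public override void ItemAdded(SPItemEventProperties properties)
    {
        new DocumentService().Process(properties.ListItemId);
    }
}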

Test Projects

  • System Acceptance Tests: business-focused tests that describe the system – these tests should live long term reasonably unchanged
  • System Smoke Tests: tests that can run in any environment to confirm that it is up and running
  • Integration Tests: tests that have 1 dependency and 1 interaction, usually against a third-party API – in this case mainly the SharePoint API – these may create scenarios on each method
  • Unit Tests: tests that have no dependencies (or have them mocked out) – model tests, validations, service tests, exception handling

Solution Structure

Below is the source folder of code in the source repository (ie not lib/, scripts/, tools/). The solution file (.sln) lives in the src/ folder.

Taking a look below, we see our 4 layers with 3 test projects. In this sample layout, I have included folders which suggest that we have code around the provisioning and configuration of the site for deployment – see here for a description of our installation strategy. These functional areas exist across multiple projects: they have definitions in the Domain, implementations in the Infrastructure and both unit and integration tests.

I have also included Logging because central to any productivity gains in SharePoint is to use logging and avoid using a debugger. We now rarely attach a debugger for development. And if we do, it is no longer our first tactic, as was previously the case.
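
By way of a sketch – my own naming, not our production code – the Domain defines a logging abstraction and Infrastructure supplies the implementation, so nothing outside Infrastructure needs to know where the log lines end up:

using System;

// Domain-level abstraction: no SharePoint reference.
public interface ILogger
{
    void Info(string message);
    void Error(string message, Exception e);
}

// An Infrastructure implementation might write to ULS or a file; a console
// version is enough to show the seam.
public class ConsoleLogger : ILogger
{
    public void Info(string message)
    {
        Console.WriteLine("[INFO] {0}", message);
    }

    public void Error(string message, Exception e)
    {
        Console.WriteLine("[ERROR] {0}: {1}", message, e);
    }
}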

You may also notice Migrations/ in Infrastructure. These are the migrations that we use with migratordotnet.

Finally, the UI layer should look familiar and this is a subset of folders.

src/
  Application/

  Domain/
    Model/
      Provisioning/
      Configuration/
    Services/
      Provisioning/
      Configuration/
    Logging/
  
  Infrastructure/
    Provisioning/
    Configuration/
    Logging/    

  Tests.System/
    Acceptance/
      Provisioning/
      Configuration/
    Smoke/
  
  Tests.Integration/
    Fixtures/
    Provisioning/
    Configuration/
    Migrations/

  Tests.Unit/
    Fixtures/
    Model/
    Provisioning/
    Configuration/
    Logging/
    Services/
    Site/    
    
  Ui/
    Features/
    Layouts/
    Package/
    PageLayouts/
    Stapler/
    ...

Writing code in our layers in practice

The cadence of the developers' work is also based on this separation. It generally looks like this:

  1. write acceptance tests (eg given/when/then) – see the sketch after this list
  2. begin coding with tests
  3. sometimes starting with Unit tests – eg new Features, or jQuery widgets
  4. in practice, because it is SharePoint, move into integration tests to isolate the API task
  5. complete the acceptance tests
  6. write documentation of SharePoint process via screen shots
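
The shape of step 1, as a hedged sketch (plain NUnit; the fixture name is a placeholder, not a framework we ship):

using NUnit.Framework;

[TestFixture]
public class MySiteProvisioningAcceptanceTest
{
    [Test]
    public void ShouldProvisionHomePageForNewMySite()
    {
        // Given a user without a MySite
        // When the provisioning feature is activated
        // Then a Home.aspx page exists with the custom master page
        Assert.Inconclusive("parked until the implementation is hooked up");
    }
}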

We also have a task sheet for estimation (for sprint planning) that is based around this cadence.

Task Estimation for story in Scrum around SharePoint feature

A note on test stability

Before I finish this post and start showing some code, I just want to point out that getting stable deployments and stable tests requires discipline. The key issues to allow for are the usual suspects:

  • start scripted deployment as early as possible
  • deploy with scripts as often as possible, if not all the time
  • try to never deploy or configure through the GUI
  • if you are going to require a migration (GUI-based configuration), script it early: while it is faster to do through the GUI, that is a developer-level (local) optimisation for efficiency and won't help with stabilisation in the medium term
  • unit tests are easy to keep stable – if they aren't then you are seriously in trouble
  • integration tests are likely to be hard to keep stable – ensure that you have the correct setup/teardown lifecycle and that you can fairly assume that the system is clean
  • as per any test, make sure integration tests are not dependent on other tests (this is standard stuff)
  • system smoke tests should run immediately after an installation and should be able to be run in any environment at any time
  • system smoke tests should not be destructive precisely because they are run in any environment, including production, to check that everything is working – see the sketch after this list
  • system smoke tests shouldn't manage setup/teardown because they are non-destructive
  • system smoke tests should be fast to run and fail
  • get all these tests running on the build server asap
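
As a sketch of a non-destructive smoke test – the URL and timeout below are placeholders – a read-only HTTP check keeps it safe to run in production and quick to fail:

using System.Net;
using NUnit.Framework;

[TestFixture]
public class SiteSmokeTest
{
    [Test]
    public void HomePageShouldRespond()
    {
        var request = (HttpWebRequest)WebRequest.Create("http://intranet/");
        request.Timeout = 5000;               // fast to run and fast to fail
        request.UseDefaultCredentials = true; // NTLM in most farms

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);
        }
    }
}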

Test-first and test-last development

TDD does not need to be exclusively test-first development. I want to suggest that different layers require different strategies but, most importantly, that there is a consistency to the strategy to help establish cadence. This cadence is going to reduce transaction costs – knowing when we are done, quality assurance for coverage, moving code out of development. Above I outlined writing code in practice: acceptance test writing, unit, integration and then acceptance test completion.

To do this, I test-last acceptance tests. As developers, we write BDD-style user story (given/when/then) acceptance tests. While each is written first, it rarely is test-driven because we might not then actually implement the story directly (although sometimes we do). Rather, we park it. Then we move into the implementation which is encompassed by the user story, and we move into classical unit test assertion mode in unit and integration tests. Where there is a piece of code that is clearly unit testable (models, validation, services), it is completed test-first – we pair on it and use Resharper support to code outside-in. We may also need to create data access code (ie SharePoint code) and this is created with integration tests. Interestingly, because it is SharePoint, we break many rules. I don't want devs to write Infrastructure code test-last but often we need to spike the API. So, we actually spike the code in the integration test and then refactor to the Infrastructure as quickly as possible. I think that this approach is slow and that we would be best to go test-first, but at this stage we are still getting a handle on good Infrastructure code to wrap the SharePoint API. The main point is that we don't have untested code in Infrastructure (or Infrastructure code lurking in the UI). These integration tests in my view are test-last in most cases simply because we aren't driving design from the tests.

At this stage, we have unfinished system acceptance tests and code in the domain and infrastructure (all tested). What we then do is hook the acceptance test code up. We do this instead of hooking up the UI because then we don't kid ourselves about whether the correct abstraction has been created. Having hooked up the acceptance tests, we can then simply hook up the UI; the reverse has often not been the case. The most important point is that we have hooked up our Domain/Infrastructure code to two clients (acceptance and UI), and this tends to prove that we have a maintainable level of abstraction for the current functionality/complexity. This approach is akin to when you have a problem and you go to multiple people to talk about it: by the time you have had multiple perspectives, you tend to get clarity about the issues. Similarly, in allowing our code to have multiple conversations, in the form of client libraries consuming it, we know the sorts of issues our code is going to have – and hopefully, because it is software, we have refactored the big ones out (ie we can live with the level of cohesion and coupling for now).

I suspect for framework or even line of business applications, and SharePoint being one of many, we should live with the test-first and test-last tension. Test-first is a deep conversation that in my view covers off so many of the issues. However, like life, these conversations are not always the best to be had every time. But for the important issues, they will always need to be had and I prefer to have them early and often.

None of this means that individual developers get to choose which parts get test-first and test-last. It requires discipline to use the same sequencing for each feature. This takes time for developers to learn and leadership to encourage (actually, enforce, review and refine). I am finding that team members can learn the rules of the particular code base in between 4-8 weeks if that is any help.

Test Strategy in SharePoint: Part 1 – testing poor layering is not good TDD

November 27th, 2010 2 comments


Overall goal: write maintainable code over code that is easy to write (Freeman and Price, 2010)

My review of SharePoint and TDD indicated that we need to use layering techniques to isolate SharePoint and not to confuse unit tests with integration tests. Let's now dive into the nature of the code we write in SharePoint.

This post is one of two. This first one reviews two existing code samples that demonstrate testing practices: SharePoint Magic 8 Ball & Unit Testing SharePoint Foundation with Microsoft Pex and Moles: Tutorial for Writing Isolated Unit Tests for SharePoint Foundation Applications. I argue against these practices because this type of mocking encourages poor code design: it uses unit tests exclusively rather than isolating SharePoint behind integration tests. In the second post, I will cover examples that demonstrate layering and the separation of unit and integration tests, including how to structure Visual Studio solutions.

What is the nature of SharePoint coding practice?

At this stage, I am not looking at “customisation” code (ie WebParts) but rather at “extension” code practices where we effectively configure up a new site.

  1. programming code is often “wiring” deployment code (eg files, mostly xml) that then needs to be glued together via .net code against the SharePoint API
  2. the resulting code is problematically procedural in nature and untestable outside its deployment environment
  3. this makes it error prone and slow
  4. it also encourages developers to work in large batches, across longer-than-needed time frames, with slow feedback
  5. problems tend to occur in later environments

A good summary of how the nature of a SharePoint solution may not lend itself to classical TDD:

So, now lets take a typical SharePoint project. Of course there is a range and gamut of SharePoint projects, but lets pick the average summation of them all. Thus in a typical SharePoint project, the portion where TDD is actually applicable is very small – which is the writing code part. In most, not all, SharePoint projects, we write code as small bandaids across the system, to cover that last mile – we don’t write code to build the entire platform, in fact the emphasis is to write as little code as possible, while meeting the requirements. So already, the applicability of TDD as a total percentage of the project is much smaller. Now, lets look at the code we write for SharePoint. These small bandaids that can be independent of each other, are comprised of some C#/VB.NET code, but a large portion of the code is XML files. These large portion of XML files, especially the most complex parts, define the UI – something TDD is not good at testing anyway. Yes I know attempts have been made, but attempt != standard. And the parts that are even pure C#, we deal with an API which does not lend itself well to TDD. You can TDD SharePoint code, but it’s just much harder.

What we found when we stuck to the procedural, wiring-up techniques was:

  1. repetitive, long blocks of code
  2. funky behaviours in SharePoint
  3. long periods of time sitting around watching the progress bar in the browser
  4. disagreement and confusion between (knowledgeable) developers on what they should be doing and what was happening
  5. block-copy inheritance (cut-and-paste) within and between visual studio solutions
  6. no automated testing

Looking at that problem, we formulated the charitable position that standard SharePoint techniques allow you to write easy code – or more correctly, allow you to copy and paste others' code and hack it to your needs. We decided that we would instead try to write maintainable code. This type of code is layered and SOLID.

Okay so where are the code samples for us to follow?

There are two main samples to look at and both try and deal with the SharePoint API. Let me cover these off first before I show where we went.

SharePoint Magic 8 Ball

SharePoint Magic 8 Ball is some sample source code from the Best Practices SharePoint conference in 2009. This is a nice piece of code for demonstrating both how to abstract away SharePoint and how not to worry about unit testing SharePoint.

Let me try and explain. The functionality of the code is that you ask a question of a “magic” ball and get a response – the response is actually just picking an “answer” randomly from a list. The design is a simple one in two parts: (1) the list provider for all the answers and (2) the picker (aka the Ball). The list is retrieved from SharePoint and the picker takes a list from somewhere.

The PickerTest (aka the ball)

[TestFixture]
public class BallPickerTest
{
    [Test]
    public void ShouldReturnAnAnswerWhenAskedQuestion()
    {
        var ball = new Ball();
        var answer = ball.AskQuestion("Will it work?");

        Assert.IsNotNull(answer);
    }
}

And the Ball class

public class Ball
{
    public List<string> Answers { get; set; }
    
    public Ball()
    {
        Answers = new List<string>{"yes"};
    }

    public string AskQuestion(string p)
    {
        Random random = new Random();
        int item = random.Next(Answers.Count);
        return Answers[item];
    }
}

That’s straightforward. So then the SharePoint class gets the persisted list.

public class SharePoint
{
    public List<string> GetAnswersFromList(string ListName)
    {
        List<String> answers = new List<string>();
        SPList list = SPContext.Current.Web.Lists[ListName];
        foreach (SPListItem item in list.Items)
        {
            answers.Add(item.Title);
        }
        return answers;
    }
}

Here’s the test that shows how you can mock out SharePoint. In this case, the tester is using TypeMock. Personally, I am going to argue that this isn't an appropriate test. You can do it, but I wouldn't bother. I'll come back to how I would rather write an integration test.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using NUnit.Framework;
using TypeMock.ArrangeActAssert;
using Microsoft.SharePoint;

namespace BPC.Magic8Ball.Test
{
    [TestFixture]
    public class SharePointTest
    {
        [Test]
        public void ShouldReturnListOfAnswersFromSharePointList()
        {
              SPList fakeList = Isolate.Fake.Instance<SPList>();
              Isolate.WhenCalled(() => SPContext.Current.Web.Lists["ListName"]).WillReturn(fakeList);
              SPListItem fakeAnswerOne = Isolate.Fake.Instance<SPListItem>();
              Isolate.WhenCalled(() => fakeAnswerOne.Title).WillReturn("AnswerOne");
              Isolate.WhenCalled(() => fakeList.Items).WillReturnCollectionValuesOf(new List<SPListItem> { fakeAnswerOne });
  
              var answers = new SharePoint().GetAnswersFromList("ListName");

              Assert.AreEqual("AnswerOne", answers.First());
        }
    }
}

The code above nicely demonstrates that TypeMock can mock out a static class that lives somewhere else. Put differently, you don't have to use dependency injection patterns to do your mocking. I want to argue that this is a poor example because it is not a practical unit test – and I'm sure that if I wasn't lazy I could find the test smell here already documented in xUnit Test Patterns by Gerard Meszaros. Not least the naming of the classes!

The key problem with the sample project is that the real test here is whether there is good decoupling between the Picker (the ball) and the answers (sharepoint list). If there is any mocking to go on, it should be to check that the answers (sharepoint list) are actually invoked. If this is the case, then this also points to a potential code smell: the dependency on the answer list (either as a List or as SharePoint) should be documented clearly. In other words, it might want to be passed in through the constructor. You might even argue that the ball should not have a property for the answers but rather a reference to the SharePoint (answer getter). This may seem a small point but it is important because we need designs that scale – and decoupling has been proven to do so.

So, instead of this code:

var ball = new Ball();
var sharePoint = new SharePoint();

ball.Answers = sharePoint.GetAnswersFromList(ListName);
var answer = ball.AskQuestion("MyQuestion");

You are more likely to have this if you hand in the list:

var ball = new Ball(new SharePoint().GetAnswersFromList(ListName));
var answer = ball.AskQuestion("MyQuestion");

Or this if the SharePoint is a dependency:

var ball = new Ball(new SharePoint(ListName));
var answer = ball.AskQuestion("MyQuestion");


What is at the heart of this sample is that it is trying to unit test everything. Instead it should split tests into unit and integration tests. I have documented elsewhere very specific definitions of unit and integration tests:

  • Unit – a test that has no dependencies (do our objects do the right thing, are they convenient to work with?)
  • Integration – a test that has only one dependency and tests one interaction (usually, does our code work against code that we can’t change?)

What would I do?

  • unit test the ball – there are a number of tests around the AskQuestion() method: does the randomness work, can I hand it a list of answers? If I am going to hand in the SharePoint class, I can then hand in a fake SharePoint class for stubbing and mock the class to check that it actually calls the GetAnswersFromList() method – see the sketch after this list
  • integration test the SharePoint class – the integration test requires that SharePoint is up and running. This is a great integration test because it checks that you have all your ducks lined up for an SPContext call and that you have used the correct part of the API. Interestingly, the harder part of the integration test is getting the setup context correct. Having to deal with the Setup (and Teardown) is in fact one of the most important benefits of these types of integration tests. We need to remember that these integration tests merely confirm that the API is behaving as we would expect in the context in which we are currently working. No more. No less.
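
A sketch of the first bullet – IAnswerSource is my name, not the sample's; the Ball takes a list, so the fake simply supplies one and records that it was asked:

using System.Collections.Generic;
using NUnit.Framework;

public interface IAnswerSource
{
    List<string> GetAnswersFromList(string listName);
}

public class FakeAnswerSource : IAnswerSource
{
    public bool WasAsked;

    public List<string> GetAnswersFromList(string listName)
    {
        WasAsked = true;
        return new List<string> { "yes", "no" };
    }
}

[TestFixture]
public class BallWithAnswerSourceTest
{
    [Test]
    public void ShouldAskTheSourceAndPickFromItsAnswers()
    {
        var source = new FakeAnswerSource();
        var ball = new Ball { Answers = source.GetAnswersFromList("Answers") };

        var answer = ball.AskQuestion("Will it work?");

        Assert.IsTrue(source.WasAsked);
        Assert.Contains(answer, new[] { "yes", "no" });
    }
}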

In summary, this sample shows particular layering in the code but not in the testing. Layering your tests is just as important. In SharePoint, we have had the most success in creating abstractions exclusively for SharePoint and testing these as integration tests. These SharePoint abstractions have also been created in a way that lets us hand mock implementations into other layers, so that we can unit test the parts of the code that have logic. This is a simple design that effectively makes SharePoint just another persistence-type layer. There is a little more to it than that but that isn't for here.

Let’s turn to Pex and Moles and see if it provides an option any better than TypeMock. I suspect not, because the issue here isn't the design of the test system – both can intercept classes hidden and used somewhere else with my own delegates – but rather the design of our code. I'm hoping for tools that help me write better, more maintainable code – so far both of them look like the “easy code” smell.

Pex and Moles

So I’ve headed off to Unit Testing SharePoint Foundation with Microsoft Pex and Moles: Tutorial for Writing Isolated Unit Tests for SharePoint Foundation Applications. It is a great document that tells us why SharePoint is not a framework built with testability baked in.

The Unit Testing Challenge. The primary goal of unit testing is to take the smallest piece of testable software in your application, isolate it from the remainder of the code, and determine whether it behaves exactly as you expect. Unit testing has proven its value, because a large percentage of defects are identified during its use.
The most common approach to isolate production code in a unit test requires you to write drivers to simulate a call into that code and create stubs to simulate the functionality of classes used by the production code. This can be tedious for developers, and might cause unit testing to be placed at a lower priority in your testing strategy.

It is especially difficult to create unit tests for SharePoint Foundation applications because:

* You cannot execute the functions of the underlying SharePoint Object Model without being connected to a live SharePoint Server.

* The SharePoint Object Model-including classes such as SPSite and SPWeb-does not allow you to inject fake service implementations, because most of the SharePoint Object Model classes are sealed types with non-public constructors.

Unit Testing for SharePoint Foundation: Pex and Moles. This tutorial introduces you to processes and concepts for testing applications created with SharePoint Foundation, using:

* The Moles framework – a testing framework that allows you to isolate .NET code by replacing any method with your own delegate, bypassing any hard-coded dependencies in the .NET code.

* Microsoft Pex – an automated testing tool that exercises all the code paths in a .NET code, identifies potential issues, and automatically generates a test suite that covers corner cases.

Microsoft Pex and the Moles framework help you overcome the difficulty and other barriers to unit testing applications for SharePoint Foundation, so that you can prioritize unit testing in your strategy to reap the benefits of greater defect detection in your development cycle.

The Pex and Moles sample code works through this example. It shows you how to mock out all the dependencies required from this quite typical piece of code. Taking a quick look we have these dependencies:

  • SPSite returning a SPWeb
  • Lists on web
  • then GetItemById on Lists returning an SPListItem as item
  • SystemUpdate on item
public void UpdateTitle(SPItemEventProperties properties) 
{ 
	using (SPWeb web = new SPSite(properties.WebUrl).OpenWeb()) 
	{ 
		SPList list = web.Lists[properties.ListId]; 
		SPListItem item = list.GetItemById(properties.ListItemId); 
		item["Title"] = item["ContentType"]; 
		item.SystemUpdate(false); 
	} 
}

Here’s a quick sample to give you a feeling for it if you don't want to read the pdf. In this code, the first thing to know is that an “M” prefix is used by convention for the intercepting classes. In this case, MSPSite intercepts the SPSite and then returns an MSPWeb on which we override the Lists and return an MSPListCollection.

string url = "http://someURL"; 

MSPSite.ConstructorString = (site, _url) => 
{ 
	new MSPSite(site) 
	{ 
		OpenWeb = () => new MSPWeb
		{ 
			Dispose = () => { }, 
			ListsGet = () => new MSPListCollection 

 ...
 

There is no doubt that Pex and Moles is up to the job of mocking these out. Go and look at the article yourself. Others have commented on getting used to the syntax and I agree that, being unfamiliar with it, it is not as easy as, say, Moq. But it seems similar to MSpec. That's not my gripe. My gripe is that the tool helps bake badness into my design. Take the example where the sample adds validation logic to the code above; the authors seem to think that this is okay.

public void UpdateTitle(SPItemEventProperties properties) 
{ 
	using (SPWeb web = new SPSite(properties.WebUrl).OpenWeb()) 
	{ 
		SPList list = web.Lists[properties.ListId]; 
		SPListItem item = list.GetItemById(properties.ListItemId); 
		
		string content = (string)item["ContentType"]; 
		if (content.Length < 5) 
			throw new ArgumentException("too short"); 
		if (content.Length > 60) 
			throw new ArgumentOutOfRangeException("too long"); 
		if (content.Contains("\r\n")) 
			throw new ArgumentException("no new lines"); 
		item["Title"] = content; 

		item.SystemUpdate(false); 
	} 
}	

The sample, described as “a more realistic example to test”, just encourages poor separation when writing line-of-business applications. This is validation logic and there is no reason whatsoever to test validation alongside the need for data connections. Furthermore, there is little separation of concerns around exception throwing, handling and logging.

I’m therefore still frustrated that we are being shown easy-to-write code. Both Pex and Moles and TypeMock (regardless of how cool or great they are) are solutions for easy-to-write code. In this sample, we should see separation of concerns. There are models, there are validators, there is checking of validations and error handling. All these concerns can be written as unit tests with standard libraries.
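
For example – my naming, not the tutorial's – the rules pull out into a plain validator that unit tests cover with no server anywhere in sight:

using System;
using NUnit.Framework;

public class TitleValidator
{
    public void Validate(string content)
    {
        if (content == null || content.Length < 5)
            throw new ArgumentException("too short");
        if (content.Length > 60)
            throw new ArgumentOutOfRangeException("content", "too long");
        if (content.Contains("\r\n"))
            throw new ArgumentException("no new lines");
    }
}

[TestFixture]
public class TitleValidatorTest
{
    [Test]
    public void ShouldRejectTitlesUnderFiveCharacters()
    {
        Assert.Throws<ArgumentException>(() => new TitleValidator().Validate("abc"));
    }
}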

We will then also need integration tests to check that we can get list items. But these can be abstracted into other classes and, importantly, other layers. If we do that, we will also avoid the duplication that we currently see with the using (SPWeb web = new SPSite(properties.WebUrl).OpenWeb()) code block. I am going to return to this in another post which is less about layering and more about some specific SharePoint tactics once we have a layering strategy in place.
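
One hedged way to kill that duplication (SiteContext is my name): an Infrastructure helper owns the SPSite/SPWeb lifecycle and hands the opened web to a callback, so callers never repeat the using block – and, as a bonus, the SPSite gets disposed too, which the original one-liner never does.

using System;
using Microsoft.SharePoint;

public static class SiteContext
{
    public static void Do(string webUrl, Action<SPWeb> action)
    {
        using (SPSite site = new SPSite(webUrl))
        using (SPWeb web = site.OpenWeb())
        {
            action(web);
        }
    }
}

// Usage: SiteContext.Do(properties.WebUrl, web => { /* work with web */ });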


SharePoint deployment packaging

November 27th, 2010 No comments

In this blog post I discuss a strategy for adding migrations and scripting to the deployment of SharePoint solutions. What was implicit in there was that we were packaging the SharePoint solutions (wsp files) into a release package that had scripting. I documented here the types of scripts we used in order to get automated deployments.

Taking a step back, the reasons we went down this path were that, in practice:

  1. in the first sprint, we lost 2.5 days times 3 developers' worth of work on SharePoint API funkiness alone (eg poor error reporting, API bugs)
  2. developers were often waiting long periods of time for deployments
  3. when we did do “quick” deploys they were unreliable (ie pushing into the 14 hive), say with a javascript deployment (eg CKS.Dev.Server.vsix)

To do this, we had to take a step back from delivering functionality and deliver stability. This actually took us about a month to get under control – I wish I could send a particular corporation the bill for an under-finished product.

Therefore, beware. If you are reading this and it is new to you, you are likely to need to build in an initial theme over the first month or so: stability of the code base and its deployment. This is of course not a problem exclusive to SharePoint.

Approach:

  • map out the value stream for deployment through environments, identify pain points (result: scripting but not automation)
  • start scripted deployments
  • solve instability issues

Solutions:

  • versioned release packages – see below for the packing.proj msbuild tasks
  • scripted deployments
  • migrations as part of deployment
  • a general rule of no GUI-based deployment or configuration

Results

  1. a standard deployment now takes between 1-3 mins (each scripted installation reports time taken)
  2. the biggest lag is still the bootstrapping process that SharePoint undergoes with a new deployment

package.proj

This packaging project is an adaptation of the sample deployment project found here. In that sample, you'll see the need for dependencies in the lib/ folder (such as 7zip for compression). Down in the script you might also notice that we include migratordotnet for managing releases, which actually requires the original binaries – I need to get back to this issue and try not to include binaries like this.

<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="HelpBuild"  ToolsVersion="3.5" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
	    <Major>0</Major>
	    <Minor>1</Minor>
	    <Build>0</Build>
	  </PropertyGroup>

  <PropertyGroup>
    <Revision Condition="'$(Revision)'==''">0</Revision>
    <Version Condition="'$(Version)'==''">$(Major).$(Minor).$(Build).$(Revision)</Version>
    <DropLocation Condition="'$(DropLocation)'==''">$(MSBuildProjectDirectory)\CodeToDeploy\Publish</DropLocation>
    <BuildCmd Condition="'$(BuildCmd)'==''">ReBuild</BuildCmd>
    <ReleaseEnvironment Condition="'$(ReleaseEnvironment)'==''">Test</ReleaseEnvironment>


    <ReleaseName>MySites-$(ReleaseEnvironment)</ReleaseName>
    <ReleasePath>$(DropLocation)\..\Releases</ReleasePath>
    <DropLocationWsp>$(DropLocation)\wsp</DropLocationWsp>
    <BinariesRoot>$(MSBuildProjectDirectory)\src\Site</BinariesRoot>
    <LibRoot>$(MSBuildProjectDirectory)\lib</LibRoot>
    <WspRoot Condition="'$(WspOutDir)'==''">$(BinariesRoot)</WspRoot>
    <ReleaseZipFile>$(ReleasePath)\Site-$(Version).zip</ReleaseZipFile>

    <ExtractPath>$(DropLocation)\..\Deploy</ExtractPath>

    <Zip>$(MSBuildProjectDirectory)\lib\7z\7z.exe</Zip> 
  </PropertyGroup>

  <ProjectExtensions>
    <Description>Build Releasable MySites</Description>
  </ProjectExtensions>

  <ItemGroup>
    <SolutionToBuild Include="src\Site.sln">
      <Properties>Configuration=Release;Platform=x64;OutDir=$(WspRoot)\bin\x64\Release\</Properties>
    </SolutionToBuild>
    <WspToBuild Include="$(WspRoot)\UI.csproj">
      <Properties>Configuration=Release;Platform=x64;OutDir=$(WspRoot)\bin\x64\Release\</Properties>
    </WspToBuild>
    <WspFiles Include="$(BinariesRoot)\**\*\*.wsp" />
  </ItemGroup>

  <ItemGroup>
	  
    <TasksFiles Include="scripts\deploy.ps1" />
    <TasksFiles Include="scripts\deploy\migrations.ps1;
                         scripts\deploy\deploy.ps1;
                         scripts\psake.psm1;
                         scripts\deploy\install.ps1" />
    
    <MigrationFiles Include="$(WspRoot)\bin\x64\Release\Infrastructure.dll;
                             $(WspRoot)\bin\x64\Release\Domain.dll" /> 
    <MigratorFiles Include="$(LibRoot)\migratordotnet\*" /> 
  </ItemGroup>

  <Target Name="Package" DependsOnTargets="Clean;Version-Writeable;Version;Compile;Version-Reset;Version-ReadOnly;Publish;Zip"/>
  <Target Name="Install" DependsOnTargets="Package;Extract;Deploy"/>

  <Target Name="Compile">
    <MSBuild Projects="@(SolutionToBuild)" Targets="Rebuild" />
    <CallTarget Targets="PackageWsp" />
  </Target>

  <Target Name="PackageWsp">
    <MSBuild Projects="@(WspToBuild)" Targets="ReBuild;Package" />
  </Target>

  <Target Name="Publish">
    <MakeDir Directories="$(DropLocationWsp)" Condition = "!Exists('$(DropLocationWsp)')" />
    
    <Copy SourceFiles="@(WspFiles)" DestinationFolder ="$(DropLocationWsp)" SkipUnchangedFiles="true"/>
    <Copy SourceFiles="@(TasksFiles)" DestinationFolder ="$(DropLocation)\scripts\%(TasksFiles.RecursiveDir)" SkipUnchangedFiles="true" />
    
    <Copy SourceFiles="@(MigrationFiles)" DestinationFolder ="$(DropLocation)\lib\%(MigrationFiles.RecursiveDir)" SkipUnchangedFiles="true" />
    <Copy SourceFiles="@(MigratorFiles)" DestinationFolder ="$(DropLocation)\lib\migratordotnet\%(MigratorFiles.RecursiveDir)" SkipUnchangedFiles="true" />
    
    <Exec Command="echo .> &quot;$(DropLocation)\$(Version)&quot;"/>
    
    <Exec Command="echo powershell -ExecutionPolicy Unrestricted -Command &quot; &amp;{import-module .\scripts\psake.psm1; Invoke-Psake Install}&quot; > &quot;$(DropLocation)\Install.bat&quot;"/>
    <Exec Command="echo powershell -ExecutionPolicy Unrestricted -NoExit -Command &quot; &amp;{import-module .\scripts\psake.psm1; Invoke-Psake -docs}&quot; > &quot;$(DropLocation)\here.cmd&quot;"/>  
    <Exec Command="echo Include .\scripts\migrations.ps1; Include .\scripts\deploy.ps1; Include .\scripts\install.ps1 > &quot;$(DropLocation)\default.ps1&quot;"/>  
  </Target>

  <Target Name="Clean" DependsOnTargets="CleanPublish;CleanReleases" >
    <RemoveDir Directories="$(DropLocation)\.." />
  </Target>

  <Target Name="CleanPublish">
    <RemoveDir Directories="$(DropLocation)" />
  </Target>

  <Target Name="CleanReleases">
    <RemoveDir Directories="$(ReleasePath)" />
    <RemoveDir Directories="$(ExtractPath)" />
  </Target>

  <Target Name="Zip">
    <MakeDir Directories="$(ReleasePath)" Condition = "!Exists('$(ReleasePath)')" />
    <Exec Command="$(Zip) a -tzip %22$(ReleaseZipFile)%22" WorkingDirectory="$(DropLocation)"/>
  </Target>

  <Target Name="Extract">
    <MakeDir Directories="$(ExtractPath)"  Condition = "!Exists('$(ExtractPath)')"/>
    <Exec Command="$(Zip) x %22$(ReleaseZipFile)%22 -o$(ExtractPath)" WorkingDirectory="$(DropLocation)"/>
  </Target>

  <Target Name="Deploy">
    <Exec Command="$(ExtractPath)\deploy.bat $(ExtractPath)"  WorkingDirectory="$(ExtractPath)" ContinueOnError="false" />
  </Target>


  <Target Name="HelpBuild">
    <Message Text="

    msbuild /t:Package

    Examples:
      msbuild /t:Install
      msbuild /t:Install /p:ReleaseEnvironment=Test
      msbuild @build.properties /t:Package /v:d 

    Variables that can be overridden:
      DropLocation=C:\Binaries
      ReleaseEnvironment=Dev|[Test]|Prod
      BuildCmd=Build|[Rebuild] 

    Targets:
     - Compile
     - Clean
     - CleanReleases
     - Publish
     - Zip
     - Extract
     - Deploy
     - PackageWsp

     - Package (Compile;Clean;Publish;Zip)
     - Install (Compile;Clean;Publish;Zip;Extract;Deploy)

    Log output: msbuild.log
             " />
  </Target>

</Project>	

SharePoint deployment scripting with PowerShell

November 26th, 2010 No comments

PowerShell scripting approach for SharePoint (via psake)

This is a simple plea to SharePointers: can we please have a decent build runner around PowerShell scripts in the deployment environment? Most developers in other frameworks have one. So I thought that I would outline some basic techniques. psake is as good as any of the other options (except rake, which I still think is vastly superior, but I don't want the operational hassle of needing it in other environments). Here's what I'm doing:

  • powershell wrapped in psake
  • modularised scripts
  • psake helper scripts for the real world (documented in other places)
  • helper scripts for ease of access to command-line powershell

One caveat: psake has a couple of limitations; otherwise it is a good rake port:

  • it doesn’t namespace (AFAIK) tasks and that means it effectively works in the global namespace – this means we get conflicts and have to resort to poor naming practices
  • it doesn’t create good documentation for nested scripts – that seriously sucks – I want to be able to do -docs and read what I can do
  • it really doesn’t do logging well when moving across scripts (Start-Transcript seems to suck – but that might be my knowledge)

If you don’t go down this road, what you end up with is a load of powershell scripts that we double-click on. That's not my idea of well structured or maintainable. Skip straight down to deploy.ps1 if you want to see what the scripts look like rather than all the plumbing. And, yes, there is way too much plumbing! Here is the script where I actually do the packaging up of these files.

File layout

default.ps1
here.cmd
deploy.cmd
scripts/
    psake.psm1
    psake1.ps1
    deploy.ps1
    install.ps1
wsp/
    MySolution.wsp

Quick explanation:

  • psake1.ps1, psake.psm1 – both come with psake. The .psm1 is a module and the core DSL engine – go there to understand what to do; psake1.ps1 is a helper wrapper.
  • default.ps1 is a manifest that links all your psake tasks – psake looks for default.ps1 by, well, default.
  • deploy.cmd is a GUI-based wrapper that invokes a specific task (deploy in this case)
  • here.cmd brings up a command-line interface so that I can use all the psake tasks (eg Invoke-psake Deploy)
  • deploy.ps1, install.ps1 are my two specific sets of tasks – deploy has the SharePoint cmdlets linked in (install is a further wrapper that does logging and, in my case, also invokes other scripts I haven't included, like migrations)

How to use

Option One:

  1. Click deploy.cmd

Option Two:

  1. Click here.cmd – this will list out the available commands
  2. Be able to type in powershell commands
Invoke-psake Deploy

Files

deploy.cmd

powershell -ExecutionPolicy Unrestricted -NoExit -Command "&{import-module .\scripts\psake.psm1; Invoke-Psake Install}"

default.ps1

Include .\scripts\deploy.ps1; 
Include .\scripts\install.ps1

here.cmd

powershell -ExecutionPolicy Unrestricted -NoExit -Command " &{import-module .\scripts\psake.psm1; Invoke-Psake -docs}"

psake1.ps1

import-module .\scripts\psake.psm1

Start-Transcript -Path .\install.log
invoke-psake @args
Stop-Transcript

remove-module psake

deploy.ps1

$framework = '3.5x64'

Properties {
  $base_dir = resolve-path .
  $solution = "MySolution.wsp"
  $application = $null
  $path = "$base_dir\wsp\$solution"
  $SolutionFileName = ""
}

Task Deploy -Depends Solution-Setup, Solution-UnInstall, Solution-Remove, Solution-Add, Solution-Install, Solution-TearDown

Task Solution-Remove -Depends Solution-Wait {
  Write-Host 'Remove solution'
  Remove-SPSolution -Identity $solution -confirm:$false
}

Task Solution-Add {
  Write-Host 'Add solution'
  Add-SPSolution -LiteralPath $path
}

Task Solution-Install {
  Write-Host 'install solution'
  if($application -eq $null)
  {
      Install-SPSolution -identity $solution -GACDeployment -force
  }
  else
  {
      Install-SPSolution -identity $solution -application $application -GACDeployment -force
  }
}

Task Solution-UnInstall {
  Write-Host 'uninstall solution'
  if($application -eq $null)
  {
     Uninstall-SPSolution -identity $solution -confirm:$false
  }
  else
  {
     Uninstall-SPSolution -identity $solution -application $application -confirm:$false
  }  
}

Task Solution-Setup {
  Add-PsSnapin Microsoft.SharePoint.PowerShell
}

Task Solution-TearDown -Depends Solution-Wait {
  Remove-PsSnapin Microsoft.SharePoint.PowerShell
}

Task Solution-Wait {
  $JobName = "*solution-deployment*$SolutionFileName*"
    $job = Get-SPTimerJob | ?{ $_.Name -like $JobName }
    if ($job -eq $null) 
    {
        Write-Host 'Timer job not found'
    }
    else
    {
        $JobFullName = $job.Name
        Write-Host -NoNewLine "Waiting to finish job $JobFullName"

        while ((Get-SPTimerJob $JobFullName) -ne $null) 
        {
            Write-Host -NoNewLine .
            Start-Sleep -Seconds 2
        }
        Write-Host  "Finished waiting for job.."
    }
}

install.ps1

Task Install -Depends Logging-Start, Deploy, Logging-Stop

Task Logging-Start {
  Start-Transcript -Path .\install.log
}

Task Logging-Stop {
  Stop-Transcript
}