Android Running slow on Phone and Messages

April 11th, 2012

I just spent the weekend trying to fix my Android (HTC Sensation). It was running so slow that I got to a point where I couldn't phone out, pick up calls or write text messages. My battery life was under 2 hours. How bad is that? Actually, I then found that I had a similar problem on my Mac when using Thunderbird. The solution was simple enough.

I had 20,000 contacts spread across my various accounts. There is clearly a practical limit to the number of contacts an Android can handle – even when I got it down to 5,000 I was still finding problems.

Symptoms: Low battery life, high CPU usage, unresponsive Messages and Phone app.
Diagnosis:

# Open the task manager. You should see User Dictionary using a high percentage of CPU.
# Open the People application. Select View. You should see a very high contact count.

Simple solution: have fewer contacts.

Actual problem: I have multiple Google accounts (personal and a couple of work accounts). Somehow two accounts had replicated some contacts 5,000 times! That is, I had 19,500 too many contacts. It appears to be the joy of Google+ (which I never wanted in the first place).

Solution: Remove all duplicate contacts (in this case my contacts were perhaps corrupted, because Google couldn't detect any duplicates). That's right, I had to delete/merge 19,500 contacts, 250 at a time. Then I had to make sure that I didn't resync either my Thunderbird or my Android.

End result. A working Android, a working Thunderbird and a day of my life lost.

Note: I actually did a factory reset which I didn’t need to do. What I also now have is a very nicely ordered contact list and I know that my backup is working.


Kanban Simulator

February 12th, 2012

I have decided to release a kanban simulator out into the wild. It's been a long time in the making and really I'm wondering what to use it for!

It started out as a project to think through (a) some of the calculations and relationships in a kanban system and (b) a single-page, javascript application. With the latter, it has been a great exercise. I have found AMD and requirejs to be a great solution. I combined it with building via node.js and did everything test-first in jasmine. My last results from jasmine are: 131 specs, 0 failures in 4.8s.

Anyway, this is a kanban board simulator that allows you to set up a board, run tasks through it and look at the metrics.

I’m wondering if it has any use as a training mechanism. Trainers could get participants to generate and run their own data. I haven’t written up instructions on changing the configuration but if you download the code and update the scripts/config/simple-board.js file you might get the hang of it. You’ll also see that I have created a small DSL for creating “Team practices” but that’s another story.


Pair programming: some good reminders

January 10th, 2012

I have been lucky enough to have a pairing session with a very experienced programmer lately. In my day job, I pair all the time. But it is usually with experienced, yet resistant, programmers and very rarely is it rewarding for me. This experience has been the opposite and a good reminder of why I do like pairing and want to do more of it collegially. Usually, I am pairing either as the experienced half or as a coach, but rarely on an equal(-ish) footing. What a treat!

Some background: we have been writing a graph database in Scala and are now four days in – I usually write line-of-business applications rather than tool libraries. We started with a list of features (XP style) and we have been TDDing all of the code, test-first. We have also been using a BDD-style Given/When/Then test suite in JUnit. Mostly we have been mocking, with occasional classical-TDD assertions and one fake implementation.

Here are some notes to myself, covering both pairing itself and problems we discussed.

Technical debt still grows quickly

We just spent most of day four dealing with the consequences of decisions made to keep moving. I couldn't believe how many issues were in the code that we had marked TODO, which at the time allowed us to move on. Each small decision (if I remember correctly) was very good at the time, but to have some of these come back and bite us by day four was amazing. Note to self: we should actually keep a running count of the number of TODOs and graph it, or put some upper limit on it.
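As a trivial sketch of that idea – hypothetical C# rather than anything we actually wrote, and the src path and file extension are assumptions – a running TODO count is only a few lines:

using System;
using System.IO;
using System.Linq;

class TodoCounter
{
    // Count TODO markers across the source tree so the number can be
    // graphed over time (or used to fail the build above some limit).
    static void Main()
    {
        var todoCount = Directory
            .EnumerateFiles("src", "*.scala", SearchOption.AllDirectories)
            .SelectMany(File.ReadLines)
            .Count(line => line.Contains("TODO"));

        Console.WriteLine("TODO count: {0}", todoCount);
    }
}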

Be careful about code (and domain concepts) not integrated into core

This is probably my greatest learning and reminder. Hopefully, I can explain. There was a point where we decided to write the code that serialised byte arrays to disk. Neither of us had a working knowledge of the libraries needed so we decided to reduce risk in this area. We effectively spiked this code, albeit with tests. It was all production-ready code. However, it was not integrated into the core code base. We called this programming on a leaf domain concept – maybe orphaned leaf might be more appropriate. The leaf is on the outer edge of the domain concepts. To make this more concrete: the core manages all the nodes and relationships in the database, then we manage persistence and flushing, and finally out on the ends we read/write to disk. We were working on the disk read/write and yet hadn't integrated it into the persistence code that would call it. The problem was that as we started to add functionality to the core code, the leaf code became hard to work with. In the end we treated the original leaf code as a spike and coded from the core outwards to integrate the leaf. I think this is closer to the style of GOOS.

Pairing changes the rhythm (and rhyme) of the day

The rhythm is generally evenly paced throughout the day – often it feels slow and plodding. Rarely do I quite get the excitement of individual programming when I feel on a roll whacking out the code – but neither do we get the lows and stagnation. Instead, there is much discussion and whiteboarding, and mostly the coding revolves around writing the tests (particularly with mocks spelling out the interactions and responsibilities). We ping-ponged: one person wrote a test, the other made it pass and wrote the next failing test, then handed back to the first to make that pass and start all over (with refactoring on green by either). Our day was between 6 and 10 elapsed hours with an hour off for lunch. It was sustained, engaging work. At the end of the day we were surprised how many features were ticked off. Slow and steady was the rhyme; ping-pong, red-green-refactor was the rhythm.

Domain confusions are still the biggest wasters of time

Our biggest waster was not knowing how to do something, and this was worst when we had confusions already existing in the code. Particularly on day four we had to refactor them out of the code. We made best progress when we had an even understanding between us – and this occurred when we had concrete examples mapped out on the whiteboard (specification by example). So I'm still not worried about productivity arguments about doubling up in pairing – that might be the case with trivial implementations.

Deal with complexity through examples to build simplicity in implementation

Complexity here is the interaction (and side effects) between parts of the system. We dealt with too much of this through discussion, holding ideas in our heads, while trying to isolate complexity through separation and encapsulation. For example, a test might isolate the caching and dirty-tracking mechanism for nodes, but to understand that we often had to talk about it in a wider context. I think that we could have used examples that work through the system to encapsulate the complexity, rather than keeping it limited to our discussions. This sounds to me like a mixture of specification by example and GOOS-style tests. Sorry if that isn't very clear.

Live with your own confusions and ambiguity but be careful about letting them into the code base

This is related to the point above. There were times that I was willing to live with ambiguity and confusion because my pairing partner knew what he was doing – so why wouldn't I go with the flow?! A couple of these places turned out to be poor abstractions that took us time to refactor. Perhaps I will be bolder next time. I think I would insist on more concrete end-to-end examples (rather than simply tests).

Multiple assertions in a test

Because we are using mocks there are, of course, always multiple assertions. Nonetheless, we always had to ensure that we were describing what the system did and focus on one aspect. This was just a good reminder that asserting a single thing is not the same as making only one assertion. The code was more readable and, in fact, we spent time wrapping jMock to get readable, concise assertions.

Symmetry-style tests are a really useful type of test

I haven't explicitly used symmetry-style tests within a single test before; I usually spread them across a test class. In our case, we needed to serialise and deserialise byte arrays of data (numbers, strings, properties). Usually, I would separate out these tests (also one per class – here we had a reader and a writer) and then combine them using a setup. We just put them in the same test; it was far more readable and appropriate to the level of complexity of the task.
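To illustrate the shape of such a test – a minimal sketch in C# with NUnit rather than the Scala/JUnit we were actually using, and with hypothetical ByteArrayWriter/ByteArrayReader names – the write and the read sit in the same test and assert symmetry:

using NUnit.Framework;

[TestFixture]
public class SerialisationSymmetryTest
{
    [Test]
    public void LongValueSurvivesARoundTrip()
    {
        // Hypothetical writer/reader pair: serialise and deserialise in the
        // one test and assert we get back exactly what we put in.
        var writer = new ByteArrayWriter();
        var reader = new ByteArrayReader();

        byte[] bytes = writer.Write(42L);

        Assert.That(reader.ReadLong(bytes), Is.EqualTo(42L));
    }
}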

Fake implementations are markers for debt and rework

I often use fake, concrete implementations in tests to get me going. I see them as a way to keep moving. We were acutely reminded that they are debt in the system. We found that the implementation had gone too far and we had used it in more than one test class. As such, when we came to take it out it touched a number of tests that we needed to refactor. It had also stopped us seeing some assumptions. The note to myself is to make sure that such implementations are private to the tests.

CI & source control even for the single-programmer environment is gonna save you

Because we are working in Scala some of the IDE support is not quite there. We lost time without CI. It’s there now – well, almost

Pairing reminds you (acutely) of the accommodations you make and your own limitations

Pairing is a good mirror – you get to look back at yourself, sometimes quietly and sometimes not so quietly! All those little things that I do are accentuated: the key presses I don't remember, the quirky naming conventions, the IDE features I could use better. I love the little reminders to keep improving.


Effect maps

October 29th, 2011

I just read a copy of Gojko Adzic's Agile Product Management using Effect Maps, because I too still find high-level product visualisation problematic when trying to do iterative product management. Adzic explains "Effect Maps" (which are structured mind-maps) which allow for design, prioritisation and scoping. It is well worth the read. Here are a few thoughts I had while reading:

Simple technique for a working meeting

It brings people together and always keeps the business value/problem to be solved at the base of the discussion. The mindmap starts from the question of the goal: Why are we doing this? What is the desired business change? I could see it structuring early discussions with senior people. Likewise, I could get all stakeholders in the room and use the technique.

How to see the whole and defer ambiguity

The four levels in the mindmap are detailed enough to move through the Why, Who, How and then the detailed What. But in practice, there is loads of detail still to come in the What (the features) that the team will need to work through. Being able to defer the implementation details until starting on the specifics is an important strategy to avoid wasteful work and delays.

Good for Target Cost development-type contracts

I tend to work on Target Cost contracts but always struggle with what that end goal and target cost is. Few can articulate what their break-even (or often opportunity) cost is. The effect map should provide a good way to identify needs and the potential investment required. Usually, in the end, while I might suggest a cost, clients also usually have a sense of what they can afford against their (often implicit) view of its value. Interestingly, investment is the correct vernacular over cost. Most of my clients want to capitalise my work rather than pay for it out of operational budget.

Avoids lists of stories

If you really wanted to get me started on a rant then start spouting that user stories are the technique to counter requirements documents. In my experience, this assumption always leads you back to the same root-cause analysis: the analysis wasn't refined enough (ie the story was too big, not well defined, could have been broken up more). Although all those reasons are true, I tend to see that there wasn't enough context, because the stories, while grouped (into themes), really didn't have good prioritisation and scoping at a higher level. Or as he puts it, user stories are being used for managing long-term release planning. The effect map helps by not using user stories at that level. This is nice because it keeps at the front of our mind that user stories are merely a rubric, albeit a very useful one. The effect map helps us keep user stories at the outermost leaves of the map and not toward the root. As he says,

User stories are de facto standard today for managing long term release planning. This often includes an “iteration zero”, a scoping exercise or a user story writing workshop at the start of a milestone. During the “iteration zero” key project sponsors and delivery team together come up with an initial list of user stories that will be delivered. A major problem with the “iteration zero” approach is the long stack of stories that have to be managed as a result of it. Navigating through hundreds of stories isn’t easy. When priorities change, it is hard to understand which of the hundreds of items on the backlog are affected. Jim Shore called this situation “user story hell” during his talk at Oredev 2010, citing a case of a client with 300 stories in an Excel spreadsheet. I’ve seen horror stories like that, perhaps far too often.


Having Jasmine tests results in TeamCity via node.js (on windows) invoked from powershell

September 26th, 2011

I test my javascript code via jasmine on a windows machine. I primarily write jquery-plugin-style code. Now I need to get this onto CI. A colleague I worked with took my test strategy in jasmine and wrote a library to run it very quickly on node.js. There are other places showing how to integrate TeamCity and Jasmine with JsTestDriver or QUnit, and there is also documentation on how to easily integrate service messages with TeamCity.

One caveat: I am not trying to test cross-browser functionality. Therefore, I don't need or want a browser, or the associated slowness and brittleness of cross-process orchestration. (Note: I have tried to stabilise these types of tests using NUnit and MSTest runners invoking Selenium and/or WatiN – it gets unstable quickly and there is too much wiring up.)

So this approach is simple and blindingly fast thanks to Andrew McKenzie's jasmine-node-dom, which is an extension of jasmine-dom. He wrote his for linux and his example is with Jenkins, so I have forked a version for windows which includes a node.exe binary, which is available from node.js.

Anyway, this post covers the powershell script to invoke jasmine-node-dom and publish the results to TeamCity.

Here’s the script:

build.ps1 (or directly in TeamCity)

	
	$node_dir = "node-jasmine-dom\bin"

	& "$node_dir\node.exe" "$node_dir\jasmine-dom" `
				--config tests.yaml `
				--format junit `
				--output javascript-results.xml

	write-host "##teamcity[importData type='junit' path='javascript-results.xml']"
	

An explanation if it isn't obvious. First, let's start with the files that are needed. I have the windows node-jasmine-dom installed in its own directory. I then call node.exe with jasmine-dom. That should all just work out-of-the-box. I then tell it where the manifest is that knows about the tests (tests.yaml – see below for an example) and then I give it the results file. jasmine-node-dom is great because it reads the SpecRunner.html and reconstructs the DOM enough that the tests are valid.

Finally, I tell TeamCity to import the results as JUnit output. This is very easy and I recommend that you look at what else you can do with service messages.

tests.yaml

	---
	  test_one:
	    name: Example test one
	    runner: ./tests/SpecRunner.html

This yaml file points to the Jasmine runner.

Other points:

* All my jasmine tests are invoked from their own SpecRunner.html file by convention
* I will write a script that will automatically generate the yaml file
* I always put all my powershell scripts into psake scripts (then they can be run by the dev or the build machine)
* my code isn’t quite filed as above

Summary Instructions:

# download jasmine-node-dom and install in tools\ directory
# add new Task to build scripts (I use psake)
# add a new test.yaml manifest (or build one each time)
# Add new jasmine tests via SpecRunner.html with your javascript
# Ensure that the build script is run via a build step in a configuration from TeamCity
# Now you can inspect the Tests in TeamCity


Sharepoint & TDD: Getting started advice

July 1st, 2011

I have had a couple of people asking lately about getting started on SharePoint. They've asked about how to move forward with unit and integration testing and stability. No one wants to go down the mocking route (TypeMock, Pex and Moles), and quite rightly. So here's my road map:

The foundations: Hello World scripted and deployable without the GUI

  1. Get a Hello World SharePoint “app” – something that is packageable and deployable as a WSP
  2. Restructure the folders of the code away from the Microsoft project structure so that the root folder has src/, tools/, lib/ and scripts/ folders. All source and tests are in the src/ folder. This lays the foundation for a layered code base. The layout looks like this sample application.
  3. Make the compilation, packaging, installation (and configuration) all scripted. Learn to use psake for your build scripts and powershell more generally (particularly against the SharePoint 2010 API). The goal here is that devs can build and deploy through the command line; as such, so too can the build server. I have a suggestion here that still stands but I need to blog on improvements. Most notably, not splitting out tasks but rather keeping them in the same default.ps1 (because -docs works best). Rather than getting reuse at the task level, do it as functions (or cmdlets). Also, I am now moving away from the mix with msbuild that I blogged here and am moving it all into powershell. There is no real advantage other than fewer files and a reduced mix of techniques (and lib inclusions).
  4. Create a build server and link this build and deployment to it. I have been using TFS and TeamCity. I recommend TeamCity but TFS will suffice. If you haven't created Build Definitions in TFS Workflow, allow days-to-weeks to learn it. In the end, but only in the end, it is simple. Be careful with TFS: the paradigm there is that the build server does tasks that devs don't. It looks a nice approach, but I don't recommend it, and there is nothing here by design that makes it inevitable. In TFS, you are going to need to build two build definitions: SharePointBuild.xaml and SharePointDeploy.xaml. The build is a compile, package and test. The deploy simply deploys to an environment – Dev, Test, Pre-prod and Prod. The challenge here is to work out a method for deploying into environments. In the end, I wrote a simple self-hosted windows workflow (xamlx) service that did the deploying. Again, I haven't had time to blog the sample. Alternatively, you can use psexec. The key is that for a SharePoint deployment you must be running on the local box, and most configurations have a specific service account for permissions. So I run a deployment service that runs under that service account.

Now that you can reliably and repeatably test and deploy, you are ready to write code!

Walking Skeleton

Next is to start writing code based on a layered strategy. What we have found is that we need to do two important things: (1) always keep our tests running on the build server and (2) attend to keeping the tests running quickly. This is difficult in SharePoint because a lot of code relates to integration and system tests (as defined by the test automation pyramid). We find that integration tests that require setup/teardown of a site/features get brittle and slow very quickly. In this case, reduce setup and teardown in the system tests. However, I have also had a case where an integration test showed that a redesigned object (that facaded SharePoint) would give better testability for little extra work.

  1. Create 6 more projects based on a DDD structure (Domain, Infrastructure, Application, Tests.Unit, Tests.Integration & Tests.System). Also rename your SharePoint project to UI-[Your App]; this avoids naming conflicts on a SharePoint installation. We want to create a ports-and-adapters application around SharePoint. For example, we can wrap property bags with the repository pattern. This means that we create domain models (in Domain), return them with repositories (in Infrastructure) and can test with integration tests.
  2. System tests: I have used StoryQ with the team to write tests because it allows for a setup/teardown and then multiple test scenarios. I could use SpecFlow or NBehave just as easily.
  3. Integration tests: these are written classical TDD style.
  4. Unit tests: these are also written classical TDD/BDD style.
  5. Javascript tests: we write all javascript code using a jQuery plugin style (aka Object Literal) – in this case, we use JSSpec (but I would now use Jasmine) – we put all tests in Tests.Unit but the actual javascript is still in the UI-SharePoint project. You will need two sorts of tests: Examples for exploratory testing and Specs for the specs themselves. I haven't blogged about this and need to, but it is based on my work on writing jQuery plugins with tests.
  6. Deployment tests: these are tests that run once the application is deployed. You can go to an ATOM feed which returns the results of a series of tests that run against the current system (a rough sketch of one such check follows this list). For example, we have a standard set which tells us the binary versions and which migrations (see below) have been applied. Others check whether a certain wsp has been deployed, whether different endpoints are listening, etc. I haven't blogged this code and mean to – it has been great for testers to see if the current system is running as expected. We also get the build server to pass/fail a build based on these results.
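As a rough sketch of one such check (the class and solution names here are hypothetical – ours are wrapped up in the code I haven't blogged yet), a wsp-deployed check is little more than a query against the farm:

using Microsoft.SharePoint.Administration;

public class WspDeployedCheck
{
  private readonly string _solutionName;

  // e.g. "ui-myapp.wsp" – a hypothetical solution name
  public WspDeployedCheck(string solutionName)
  {
    _solutionName = solutionName;
  }

  public bool Passed()
  {
    // The check behind the ATOM feed entry: is the solution present on
    // the farm and actually deployed?
    var solution = SPFarm.Local.Solutions[_solutionName];
    return solution != null && solution.Deployed;
  }
}

The feed simply aggregates the results of checks like this, and the build server passes or fails the build on them.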

We don't use Pex and Moles. We use exploratory testing to ensure that something actually works on the page.

Other bits you’ll need to sort out

  • Migrations: if you have manual configurations for each environment then you'll want to script/automate them. Otherwise, you aren't going to get one-click deployments. Furthermore, you'll need to assume that each environment is in a different state/version. We use migratordotnet with a SharePoint adapter that I wrote – it is here for SharePoint 2010 – there is also a powershell runner in the source to adapt – you'll need to download the source and compile. Migrations as an approach work extremely well for feature activation and publishing (a minimal sketch follows this list).
  • Application Configuration: we use domain models for configuration and then instantiate via an infrastructure factory – certain configs require SharePoint knowledge
  • Logging: you'll need to sort out a Service Locator for this, because in tests you'll swap it out for Console.Logger
  • WebParts: can’t be in a strongly typed binary (we found we needed another project!)
  • Extension Methods to Wrap SharePoint API: we also found that we wrapped a lot of SharePoint material with extension methods
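To give a feel for the migrations bullet, here is a minimal sketch of what a migratordotnet-style migration for feature activation might look like. The feature id and the SharePointFeatures helper are hypothetical – the real adapter is in the linked source:

using System;
using Migrator.Framework;

[Migration(20110701)]
public class ActivateMyFeature : Migration
{
  // Hypothetical feature id, for illustration only.
  private static readonly Guid FeatureId = new Guid("00000000-0000-0000-0000-000000000000");

  public override void Up()
  {
    // In the real adapter this activates the feature on the target site/web.
    SharePointFeatures.Activate(FeatureId);
  }

  public override void Down()
  {
    SharePointFeatures.Deactivate(FeatureId);
  }
}

Because migrations are numbered, each environment can be brought forward from whatever version it is currently at.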

Other advice: stay simple

For SharePoint developers not used to object-oriented programming, I would stay simple. In this case, I wouldn't create code with abstractions purely so that you can unit test like this. I found in the end that the complexity that came with that testability outweighed the simplicity and maintainability.

Microsoft itself has recommended the Repository Pattern to facade the SharePoint API (sorry, I can't for the life of me find the link). This has been effective. It is so effective we have found that we can facade most SharePoint calls in two ways: a repository that returns/works with a domain concept, or a Configurator (which has the single public method Process()). Any more than that and it was really working against the grain. All cool, very possible, but not very desirable for a team which rotates people.
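To make the Configurator half of that concrete, a minimal sketch (the names and the configuration detail are hypothetical) looks something like this:

using Microsoft.SharePoint;

public class SiteTitleConfigurator
{
  private readonly SPWeb _web;

  public SiteTitleConfigurator(SPWeb web)
  {
    _web = web;
  }

  // The single public method: all the SharePoint-specific work is hidden
  // in here, so callers only ever see Process().
  public void Process()
  {
    _web.Title = "My Site";
    _web.Update();
  }
}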

Repository Pattern and SharePoint to facade PropertyBag

May 23rd, 2011

Introduction

Microsoft Patterns and Practices recommend facading SharePoint with the repository pattern. If you are an object-oriented programmer that request is straightforward. If you're not, then it isn't. There are few examples of this practice: most code samples in SharePoint work directly with the API, and SharePoint is scattered throughout the entire code base. If you haven't read much about this there is a good section in Freeman and Pryce (Growing Object-Oriented Software, Guided by Tests) about this approach – they relate it back to Cockburn's ports and adapters and Evans' anti-corruption layer. I personally think about it as an anti-corruption layer.

In this post, I will give two examples of how we avoid SharePoint having too much reach into the codebase when using properties. If we were not to use this solution the code would be very EASY to write. Whenever we want a value we would use this code snippet: SPFarm.Local.Properties[key].ToString() (with some Security.RunWithElevatedPrivileges). Using this approach, at best we are likely to see the key as a global constant in some register of keys.

This type of code does not fit the Freeman and Pryce mantra of preferring to write maintainable code over code that is easy to write. Maintainable code has separation of concerns, abstractions and encapsulation – it is also testable code. So in this example what you'll see is a lot more code, but what you'll also hopefully appreciate is that we are teasing out domain concepts where SharePoint happens to be only the technical implementation.

So, the quick problem domain. We have two simple concepts: a site location and an environment. We have decided that our solution requires both of these pieces of information to be stored in SharePoint. In this case, we have further decided (rightly or wrongly – possibly wrongly) that we are going to let a little bit of SharePoint leak in: both the site location and the environment really are property bag values – we make this decision because the current developers think it is easier in the long run. So, we decided against the EASY option.

Easy Option

Create a register:

public class EnvironmentKeys {
  public const string SiteLocationKey = "SiteUrl";
  public const string EnvironmentKey = "Environment";
}

Access it anytime, either to get a value:

  var siteUrl = SPFarm.Local.Properties[EnvironmentKeys.SiteLocationKey].ToString();

Or update:

  SPFarm.Local.Properties[EnvironmentKeys.SiteLocationKey] = "http://newlocation/";
  SPFarm.Local.Update();  // don't worry about privileges as yet

Maintainable option

We are going to create two domain concepts, SiteLocation and Environment, both of which are PropertyBagItems, fronted by a PropertyBagItemRepository that will allow us to Find or Save them. Note: we've decided to be a little technology bound because we are using the notion of a property bag when we could just front each domain concept with its own repository. We can always refactor later – the other agenda here is getting SharePoint devs exposure to writing code using generics.

Here are our domain concepts.

Let’s start with our property bag item contract:

public abstract class PropertyBagItem
{
  public abstract string Key { get; }
  public abstract string Value { get; set; }
}

It has two obvious parts: key and value. Most important here is that we don’t orphan the key from the domain concept. This allows us to avoid the problem of a global register of keys.

And let’s have a new SiteLocation class.

public class SiteLocation : PropertyBagItem
{
  public override string Key { get { return "SiteLocationKey"; } }
  public override string Value { get; set; }

  public SiteLocation() { }  // the new() constraint on Find<T> needs a parameterless constructor
  public SiteLocation(string url) { Value = url; }
}

Now, let’s write a test for finding and saving a SiteLocation. This is a pretty ugly test because it requires one being set up. Let’s live with it for this sample.

[TestFixture]
public class PropertyBagItemRepositoryTest
{
  private PropertyBagItemRepository _repos;

  [SetUp]
  public void Setup()
  {
    _repos = new PropertyBagItemRepository();
    _repos.Save(new SiteLocation("http://mysites-test/"));
  }

  [Test]
  public void CanFind()
  {
    Assert.That(_repos.Find<SiteLocation>().Value, Is.EqualTo("http://mysites-test/"));
  }

} 

Now, we’ll look at a possible implementation:

public class PropertyBagItemRepository
{
  private readonly Logger _logger = Logger.Get();

  public T Find<T>() where T : PropertyBagItem, new()
  {
    var property = new T();
    _logger.TraceToDeveloper("PropertyBagItemRepository: Finding key: {0}", property.Key);
    return Security.RunWithElevatedPrivileges(() =>
        {
          if (SPFarm.Local.Properties.ContainsKey(property.Key))
          {
            property.Value = SPFarm.Local.Properties[property.Key].ToString();
            _logger.TraceToDeveloper("PropertyBagItemRepository: Found key with property {0}", property.Value);
          }
          else
          {
            _logger.TraceToDeveloper("PropertyBagItemRepository: Unable to find key: {0}", property.Key);
          }
          return property;
        });
  }
}

That should work, and we could then add more tests and an implementation for Save, which might look like the following – I prefer chaining so I return T:

public T Save<T>(T property) where T : PropertyBagItem
{
  _logger.TraceToDeveloper("PropertyBagItemRepository: Save key: {0}", property.Key);
  Security.RunWithElevatedPrivileges(() =>
  {
    SPFarm.Local.Properties[property.Key] = property.Value;
    SPFarm.Local.Update();
  });
  return property;
}

Finally, let's look at our next domain concept, the environment. In this case, we want to enumerate the possible environments. So, we'll write our integration test (yes, we should have a unit test for this domain concept first):

[Test]
public void CanFindEnvironment()
{
  Assert.That(new PropertyBagItemRepository().Find<Environment>().Code, Is.EqualTo(Environment.EnvironmentCode.DEVINT));
}

And now we can see that the implementation is a little more complex than SiteLocation, but that we can encapsulate the details well enough – actually, there is some dodgy code here, but the point is to illustrate that we keep the environment logic, parsing and checking all together:

public class Environment : PropertyBagItem
{
  public enum EnvironmentCode { PROD, PREPROD, TEST, DEVINT, DEV }

  public override string Key { get { return "EnvironmentKey"; } }

  public EnvironmentCode Code { get; private set; }

  public override string Value
  {
    get { return Enum.GetName(typeof(EnvironmentCode), Code); }
    set { Code = Parse(value); }
  }

  public Environment(EnvironmentCode code)
  {
    Code = code;
  }

  public Environment(string code)
  {
    Code = Parse(code);
  }

  public Environment() : this(EnvironmentCode.DEV) // new() constraint on Find<T> requires a parameterless constructor
  {
  }

  public static EnvironmentCode Parse(string property)
  {
    try
    {
      return (EnvironmentCode)Enum.Parse(typeof(EnvironmentCode), property, true);
    }
    catch (Exception)
    {
      return EnvironmentCode.DEV;
    }
  }
}

It wasn't that much work really, was it?


Putting your model layer in charge of validation

February 27th, 2011

This title is taken directly from a section in Steve Sanderson's book on ASP.NET MVC 2 (2nd edition). I have been struggling between MVVM-style view models with the model binding validations built in versus a domain model that binds up to the ModelState. I'm thinking that there is room for them both, but using both approaches may be confusing.

When thinking about validation, he suggests that there are three kinds, of which the first two are built into MVC (p.450):

  • Making demands about the presence or format of data that users may enter into a UI
  • Determining whether a certain .NET object is in a state that you consider valid
  • Applying business rules to allow or prevent certain operations being carried out against your domain model

And, like a lot of people, I worry about and begin with the third item first. That's why I build out my domain first and then worry about the UI (that's not to say that I don't build out my domain while thinking about the UI!).

So, model validations in the model binding are really easy. There's no wiring needed; they really do just work. For scaffolding, they're great. They feel just like my first days on Rails. But to be honest, I suspect that as I craft my views, and particularly as I head towards a fat client (aka a REST-based, jQuery-based client), all this binding won't be needed. But my domain still needs to be rock solid. Something still felt wrong from my experience. Luckily, Sanderson gets to it at the end of the chapter. On p.472 he explains that we need to put the model layer in charge because all this binding is not an ideal separation of concerns and leads to practical problems of repetition, obscurity, restriction of technology choices (phew, can I use Castle?) and, my favourite, an "unnatural chasm between validation rules and business rules". He reckons that dropping a [Required] on a model is convenient for the UI but in practice there are business rules behind it that get more complex.

Of course, the solution isn’t hard and it was just as I have already done. It requires:

  • a list of errors (which usually involves rule information: where (the properties) and what (the values) the problem was)
  • often wrapping those in an exception
  • a handler that binds those to the ModelState (because that is the MVC-specific part)

Thanks Steve. I was wondering how I was going to explain this to the group I am about to teach next week. Here’s some code that I use (and I think some of this was from a colleague Mark originally).

Here’s our errors and summary of errors:

  public struct Error
  {
      public string Key { get; set; }
      public string Message { get; set; }

      public Error(string key, string message) : this()
      {
          Key = key;
          Message = message;
      }
  }
  public class ErrorSummary
  {
      public static ErrorSummary Empty
      {
          get { return new ErrorSummary(new List<Error>()); }
      }

      public List<Error> Errors { get; private set; }

      public ErrorSummary(List<Error> errors)
      {
          Errors = errors;
      }

      public ErrorSummary(Error error) :
          this(new List<Error> {error })
      {
      }
      public ErrorSummary(string key, string message) :
          this( new Error(key, message))
      {
      }

      public ErrorSummary AddError(string key, string message)
      {
          Errors.Add(new Error(key, message));
          return this;
      }
  }

Then we'll wrap them in an exception to be caught in the MVC framework:

  public class BusinessRuleException : Exception
   {
       public BusinessRuleException(ErrorSummary summary)
       {
           Summary = summary;
       }

       public BusinessRuleException(string key, string message)
       {
           Summary = new ErrorSummary(key, message);
       }

       public BusinessRuleException(string message)
       {
           Summary = new ErrorSummary("", message);
       }

       public virtual ErrorSummary Summary { get; private set; }
   }

Now we need some code that will gather the errors from our validation framework (Castle, in this case). We'll construct it as extension methods for a fluent syntax.

  /// <summary>
  /// Validation extensions to wrap around PI models using the Castle Validations
  /// </summary>
  public static class ValidationExtension
  {
      private static readonly CachedValidationRegistry Registry = new CachedValidationRegistry();

      public static bool IsValid<T>(this T model) where T : class 
      {
          return new ValidatorRunner(Registry).IsValid(model);
      }

      public static ErrorSummary GetErrors<T>(this T model) where T : class
      {
          var runner = new ValidatorRunner(Registry);
          return !runner.IsValid(model) ? ToErrorSummary(runner, model) : ErrorSummary.Empty;
      }

      public static void ThrowValidationSummary<T>(this T model) where T : class
      {
          var runner = new ValidatorRunner(Registry);
          if (!runner.IsValid(model))
          {
              throw new BusinessRuleException(ToErrorSummary(runner, model));
          }
      }

      private static ErrorSummary ToErrorSummary<T>(IValidatorRunner runner, T model)
      {
          return new ErrorSummary(ToErrors(runner, model));
      }

      private static List<Error> ToErrors<T>(IValidatorRunner runner, T model)
      {
          var errors = new List<Error>();
          var errorSummary = runner.GetErrorSummary(model);
          for (var i = 0; i < errorSummary.ErrorsCount; i++)
          {
              errors.Add(new Error(errorSummary.InvalidProperties[i], errorSummary.ErrorMessages[i]));
          }
          return errors;
      }
  }

Now for the MVC controller code. Here we check that a model is valid and throw a summary if it isn't. We catch the exception and then bind it into the ModelState.

  public ActionResult SomeAction(Model model)
   {    
        try  // very naive implementation
        {
           model.ThrowValidationSummary();
        }
        catch (Exception ex)
        {
            if (ErrorHandling != null)
                ErrorHandling(ex, ModelState);
            else
                throw;
        }

       // rest of implementation
   }

   protected Action<Exception, ModelStateDictionary> ErrorHandling = (ex, m) =>
   {
       if (typeof(BusinessRuleException) == ex.GetType())
       {
           ((BusinessRuleException)ex).Summary.Errors.ForEach(x => m.AddModelError(x.Key, x.Message));
       }
       else if (typeof(ArgumentException) == ex.GetType())
       {
           m.AddModelError(((ArgumentException)ex).ParamName, ex.Message);
       }
       else if (typeof(UnauthorizedAccessException) == ex.GetType())
       {
           m.AddModelError("Authorization", "Authorisation Denied");
       }
       else if (typeof(HttpRequestValidationException) == ex.GetType())
       {
           m.AddModelError("Validation", "Error");
       }
   };

In practice, this code would be split up. The model checking lives in the service/repository doing the work. I would have the exception handling centralised (eg in a base controller class – a rough sketch follows). The list of exception handlers really should be registered a little more nicely than this too.
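As a rough sketch of what centralising that in a base class might look like (my guess at a shape, not Sanderson's code), the controller can expose a small helper that maps a BusinessRuleException onto ModelState:

  using System;
  using System.Web.Mvc;

  public abstract class ApplicationController : Controller
  {
      // Wraps a call to the service/repository and binds any business rule
      // failures into ModelState; returns whether the call succeeded.
      protected bool TryExecute(Action action)
      {
          try
          {
              action();
              return true;
          }
          catch (BusinessRuleException ex)
          {
              ex.Summary.Errors.ForEach(e => ModelState.AddModelError(e.Key, e.Message));
              return false;
          }
      }
  }

An action then reduces to something like if (!TryExecute(() => model.ThrowValidationSummary())) return View(model); and the list of handlers above can still sit behind TryExecute.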

Go and have a read of Sanderson. You'll see a similar implementation. A final point from him: this doesn't mean the death of client-side validation.

Just because your model layer enforces its own rules doesn’t mean you have to stop using ASP.NET MVC’s built-in validation support. I find it helpful to think of ASP.NET MVC’s validation mechanism as a useful first line of defense that is especially good at generating a client-side validation script with virtually no work. It fits in neatly with the view model pattern (i.e., having simple view-specific models that exist only to transfer data between controllers and views and do not hold business logic): each view model class can use Data Annotations attributes to configure client-side validation.
But still, your domain layer shouldn’t trust your UI layer to enforce business rules. The real enforcement code has to go into the domain using some technique like the one you’ve just seen. (p.476)

So now I'm heading off to look at knockout.js – I'm hoping it will help me with client-side validation.


UrlBinder for model properties using Url as a type

February 27th, 2011

I had the simple problem that my Url properties wouldn't bind when posted back into an action on a controller (in ASP.NET MVC 2). Take the model below: I want to have the destination as a Url. Seems fair?

  public class Banner
  {
      public string Name { get; set; }
      public Url Destination { get; set; }
  }

Looking at the controller code below, when I entered the action that the form posted to, I just couldn't see a value in banner.Destination, but I could see one in banner.Name.

  public ActionResult Create(Banner banner)
   {
       // banner had no Destination value but did have Name  
       // ... rest of code
   }

Wow, I thought. No Url. Hhhhmmmm. But I really want to be strongly typed – all my unit and integration tests worked and I felt that I wanted my domain model to remain, well, true. Given how long it took me to research the solution, one suspects I should have just made the Url a string. But I didn't, and I'm glad. The solution is simple and I understand the pipeline much better.

I need to implement IModelBinder for the Url model (which is used by the property). While Url isn't a custom class for our domain, it is treated as one by MVC. Here is my UrlBinder implementation and how to hook it up.

  public class UrlBinder : IModelBinder
  {
      public object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
      {
          if (!String.IsNullOrEmpty(bindingContext.ModelName))
          {
              var name = bindingContext.ModelName;
              var rawvalue = bindingContext.ValueProvider.GetValue(name).AttemptedValue;
              var value = new ValueProviderResult(rawvalue, rawvalue, CultureInfo.CurrentCulture);
              bindingContext.ModelState.SetModelValue(name, value);
              return new Url(rawvalue);  // I'm unclear that this needs to be returned
          }
          return null;
      }
  }

And then ensure that it is registered against the Url type:

  protected void Application_Start()
  { 
      ModelBinders.Binders.Add(typeof(Url), new UrlBinder());
  }

So, the key to this solution – and this is quite obvious now that I understand the implementation in DefaultModelBinder – is that Url is a model and not a property. I kept thinking that I needed to bind the form value to the property. Well I do, but for a "custom" type. Url is custom because (as the documentation clearly states) it does not inherit from string. Only string properties are bound by default – that includes enumerations of strings (eg IList<string>). So once you know that, it is straightforward that the solution is to implement your own IModelBinder for Url. (If you are interested, use Reflector or the source on CodePlex and look through DefaultModelBinder – you'll see the way they implement BindSimpleModel and BindComplexModel.)

This binder is pretty naive so I expect that as I learn more it will be improved. Something I didn't expect was that I don't really need to construct a Url, because at this stage of model binding I just need to have a binder for the type rather than worry about constructing the type. I do this by putting rawvalue into the ValueProviderResult – it only accepts strings. This then gets picked up again in the DefaultModelBinder when it constructs the model (in this case Banner), at which point it simply creates a new Url(value).

A simple question: why isn't there a binder for Url already in the framework? Isn't Url effectively a primitive for the web? Clearly not.

If you’ve got this far and are interested there are other ways that you really don’t want to use to solve this problem:

  • use an action filter ([Bind(Exclude="Destination")]) and then read it from the Request.Form
  • implement my own IValueProvider on the property banner
  • rewrite the ModelState explicitly clearing errors first
  • implement an IModelBinder for the Banner class
  • and the biggest and most heavy-handed of all, subclass DefaultModelBinder

Luckily, none of these solutions is needed. None would feel right because they are either too specific to the class or property, too kludgy because they use Request.Form, or just way too big for a small problem.


Software Craftsmanship and TDD

January 17th, 2011

I gave a talk on software craftsmanship about a year ago. At that time, I spoke to a handful of slides and ran woefully over time. Since then I have extended the slides to just under 40, which may or may not be readable on their own. But I thought that it might be time to put these up if anyone is interested.

Below is the abstract for the slides, which are available as a pdf.

Software craftsmanship is a movement about getting better at software development, particularly through better coding skills. This talk will look at some key discussions over the last ten years, with a particular focus on Sennett's ideas from The Craftsman, and ask: what does it mean to become a craftsman or craftswoman? How do we get better? I also look at why, as craftspeople, we might be troubled and when we may need to be vigilant! I will try to outline how this is relevant to practices like continuous integration and test-driven development.
