Archive

Archive for November, 2008

Fakes Aren't Object Mothers or Test Builders

November 20th, 2008 No comments

Fakes are not Object Mothers or Test Data Builders, and Extensions are a bridge to aid BDD through StoryQ

I’ve just spent the week at Agile2008, where it was pointed out to me that I have been using “fakes” as nomenclature for what is in fact the Object Mother pattern. So I set off to learn about the Object Mother pattern and, in doing so, came across the Test Data Builder pattern. But because I am currently working in C# 3.5 with PI/POCOs/DDD, the builder pattern looks a little heavy, and the standing criticism of Object Mother holds: it tends to end up as a God object, and hence a little too cluttered.

I’ve also found Extensions in C# 3.5 a good way to keep the Object Mother clean. Adding extensions cleans up the code significantly, enough that BDD through StoryQ became attractive. Extensions have these advantages:

  • Allows you to put a SetUp (Build/Construct) method on the object itself
  • Makes these methods available only within the test project
  • Keeps data/values separate from setup, ie as a cross-cutting concern
  • Specific edge data – and basic tests – aren’t put into object mothers
  • Extensions end up as concise, DRY code

Background

I have been using fake models as a strategy to unit test (based on Jimmy Nilsson’s DDD in C# book). Technically, these are not fakes: most descriptions treat fakes as a strategy to replace service objects rather than value objects. My use of fakes is really the “Object Mother” pattern. (Am I the only person six years behind? Probably.) Since learning about this pattern, I’ve found it on Fowler’s bliki and in the original description at XP Universe 2001 (however, I haven’t been able to download the paper as yet – xpuniverse.com isn’t responding).

Having read entries around this pattern, another pattern emerged: the “Test Data Builder” pattern. It is a likeable pattern. (In fact, it could be leveraged as an internal DSL, but I haven’t pursued that as yet.) Given the work I do in C#, though, it looks a little heavy; its value is in coping with complex dependencies. In contrast, the fake/object mother I have been using has been really easy to teach to developers, and well liked by them.

Basic approach

To test these approaches, I am going to create a simple model: a User which has an Address. Both models have validations on their fields, and I can ask a model to validate itself. You’ll note that I am using Castle’s validation component. This breaks the POCO rule of a single library, but in practice it is a good trade-off. My basic test heuristic:

  • populate with defaults = valid
  • test each field with an invalid value
  • ensure that inherited objects are also validated

Four Strategies:

  1. Object Mother
  2. Test Data Builder
  3. Extensions with Object Mother
  4. StoryQ with Extensions (and Object Mother)

Strategy One: Object Mother

First, the test loads up a valid user through a static method (or a property if you wish). I use the convention “Valid” to always provide back a valid model. This is a good way to demonstrate to new devs what the exemplar object looks like. Interestingly, in C# 3.5, the new object-initializer syntax works very well here. You can read down the list of properties easily, often without needing to look at the original model code. Moreover, the original object needs no overloaded constructor.

[Test]
 public void ValidUser()
 {
     var fake = TestUser.Valid();
     Assert.IsTrue(fake.IsOKToAccept());
 }

  public class TestUser
  {
      public static User Valid()
      {
          return new User
          {
              Name = "Hone",
              Email = "Hone@somewhere.com",
              Password = "Hone",
              Confirm = "Hone"
          };
      }
  }

Oh, and here’s the User model if you are interested. Go to Castle Validations if this isn’t clear.

 public class User
 {
     [ValidateLength(1, 4)]
     public string Name { get; set; }

     [ValidateEmail]
     public string Email { get; set; }

     [ValidateSameAs("Confirm")]
     [ValidateLength(1,9)]
     public string Password { get; set; }
     public string Confirm { get; set; }

     public bool IsOKToAccept()
     {
         ValidatorRunner runner = new ValidatorRunner(new CachedValidationRegistry());
         return runner.IsValid(this);
     }
  }

Second, the tests work through confirming that each validation works. Here we check each of the fields for invalid values. With the Object Mother pattern, each case is a separate static method: email, name and password. The strategy is to always start with the valid model, make only one change, and test. It’s a simple strategy, easy to read, and it avoids duplication. The drawback is that it hides test data: we have to trust that InvalidName really does return an invalid name, or follow it through to the static method. In practice, it works well enough.

public static User InvalidEmail()
{
    var fake = Valid();
    fake.Email = "invalid@@some.com";
    return fake;
}

public static User InvalidName()
{
    var fake = Valid();
    fake.Name = "with_more_than_four";
    return fake;
}

public static User InvalidPassword()
{
    var fake = Valid();
    fake.Password = "not_same";
    fake.Confirm = "different";
    return fake;
}

[Test]
public void InValidEmail()
{
    var fake = TestUser.InvalidEmail();
    Assert.IsFalse(fake.IsOKToAccept());
}

[Test]
public void InValidName()
{
    var fake = TestUser.InvalidName();
    Assert.IsFalse(fake.IsOKToAccept());
}

[Test]
public void InValidPassword()
{
    var fake = TestUser.InvalidPassword();
    Assert.IsFalse(fake.IsOKToAccept());
}

Strategy Two: Test Data Builder

I’m not going to spend long on this piece of code because the builder looks like too much work. The pattern is well documented, so I assume you already understand it. I do think it could be useful if you want to create an internal DSL to work through dependencies. Put differently, this example is too simple to make the case for the pattern (IMHO).

First, here’s the test for the valid user. I found that I injected the valid user behind the scenes (with an object mother) so that I could have a fluent interface ending in Build(). I can also overload the constructor to inject a user explicitly. So I have two options when passing in an object: (1) inject my object mother or (2) inject a locally constructed object. The local construction is useful for seeing explicitly what is being tested. But really, it is the syntactic sugar of C# that gives the visibility, rather than the pattern; for simple cases, the language’s syntax renders the pattern merely verbose.

[Test]
public void ValidUser()
{
    var fake = new UserBuilder().Build();
    Assert.IsTrue(fake.IsOKToAccept());

}

[Test]
public void LocalUser()
{
    var fake = new UserBuilder(TestUser.Valid()).Build();
    Assert.IsTrue(fake.IsOKToAccept());

    fake = new UserBuilder(new User
                    {
                        Name = "Hone", 
                        Email = "good@com.com",
                        Password = "password",
                        Confirm = "password",
                        Address = new Address
                                      {
                                          Street = "Fred",
                                          Number = "19"
                                      }
                    })
                    .Build();
    Assert.IsTrue(fake.IsOKToAccept());
}

public class UserBuilder
{
    private readonly User user;

    public UserBuilder()
    {
        user = TestUser.Valid();
    }

    public UserBuilder(User user)
    {
        this.user = user;
    }

    public User Build() { return user; }
}

Second, let’s validate each field. On the positive side, it is clear in the code what constitutes an invalid value, and the builder caters for dependencies, such as the one between password and confirm in withPassword. But, really, there is just too much typing: I have to create a method for every field and every dependency. For simple models, I am not going to do this; for complex or large models, it would take ages.

[Test]
public void InValidEmail()
{
    var fake = new UserBuilder()
        .withEmail("incorect@@@emai.comc.com.com")
        .Build();
    Assert.IsFalse(fake.IsOKToAccept());
}

[Test]
public void InValidName()
{
    var fake = new UserBuilder()
        .withName("a_name_longer_than_four_characters")
        .Build();
    Assert.IsFalse(fake.IsOKToAccept());
}

[Test]
public void InValidPassword()
{
    var fake = new UserBuilder()
        .withPassword("bad_password")
        .Build();
    Assert.IsFalse(fake.IsOKToAccept());
}

public class UserBuilder
{
    private readonly User user;

    public UserBuilder()
    {
        user = TestUser.Valid();
    }

    public UserBuilder(User user)
    {
        this.user = user;
    }

    public UserBuilder withEmail(string email)
    {
        user.Email = email;
        return this;
    }

    public UserBuilder withPassword(string password)
    {
        user.Confirm = password;
        user.Password = password;
        return this;
    }

    public UserBuilder withName(string name)
    {
        user.Name = name;
        return this;
    }

    public UserBuilder withAddress(Address address)
    {
        user.Address = address;
        return this;
    }

    public User Build() { return user; }
}

Strategy Three: Object Mother with Extensions

Having tried the two previous patterns, I now turn to what extensions have to offer. (Extensions are something I have been meaning to try for a while.) As it turns out, extensions combined with object initializers allow for separation of concerns and DRY code, and also let us separate valid and invalid data in tests. There is a downside, of course: it requires some (reflection) code to make it all play nicely. More code; a slippery slope, some might say…

First, I have added a new method to my User model in the form of an extension. I have named it SetUp so that it maps onto the setup and teardown (init and dispose) phases of unit testing; I could have used Build or Construct instead. This method returns my object mother. I still keep my object mother’s data separate, because I think of construction and data as separate concerns.

[Test]
 public void ValidUser()
 {
     var fake = new User().SetUp();
     Assert.IsTrue(fake.IsOKToAccept());
 }

 public static User SetUp(this User user)
 {
     // simplest version: ignore the instance and just return the object mother
     User fake = TestUser.Valid();
     return fake;
 }

I also want to test creating a version of the user that is visible from the test code. This is where more code is required, to combine any fields provided by the test with the default, valid model. The SetUp method accepts your partially hydrated object and fills in the defaults, returning a hydrated test user. The goal is that your unit test code provides only the specifics for the test; the rest of the valid fields are opaque to your test. The code reflects over your object and copies only the fields you set onto the valid default, so that validations do not fail. This reflection code is shared by all SetUp methods across models.

[Test]
 public void LocalUser()
 {
     var fake = new User { Name = "John", Email = "valid@someone.com" }.SetUp();
     Assert.IsTrue(fake.IsOKToAccept());
 }

 public static User SetUp(this User user)
 {
     User fake = TestUser.Valid();
     SyncPropertiesWithDefaults(user, fake);
     return fake; 
 }

 private static void SyncPropertiesWithDefaults(object obj, object @base)
 {
     // copy only the properties that were set (non-null) onto the valid default
     foreach (PropertyInfo prop in obj.GetType().GetProperties())
     {
         object val = prop.GetValue(obj, null);
         if (val != null)
         {
             prop.SetValue(@base, val, null);
         }
     }
 }

Second, let’s again look at the code to validate all the fields. Note that at this point there are no other object mothers. This strategy says: use object mothers to model significant data, and avoid cluttering them with edge cases. Instead, you hand in the field/value you want to isolate, and then test. I find this readable, and it addresses the concern about edge-case (invalid) data not being visible.

[Test]
public void InValidEmail()
{
    var fake = new User { Email = "BAD@@someone.com" }.SetUp();
    Assert.IsFalse(fake.IsOKToAccept());
}

[Test]
public void InValidName()
{
    var fake = new User { Name = "too_long_a_name" }.SetUp();
    Assert.IsFalse(fake.IsOKToAccept());
}

[Test]
public void InValidPassword()
{
    var fake = new User { Password = "password_one", Confirm = "password_two" }.SetUp();
    Assert.IsFalse(fake.IsOKToAccept());
}

Using extensions combined with object initializers is nice. We can also format the code to look like a fluent interface if needed. For example:

[Test]
public void InValidPassword()
{
    var fake = new User 
                { 
                    Password = "password_one", 
                    Confirm = "password_two" 
                }
                .SetUp();
    Assert.IsFalse(fake.IsOKToAccept());
}

Strategy Four: BDD and StoryQ

Having worked out that extensions reduce repetitious code, I still think there is a smell here. Are those edge cases going to add value in their current form? They really are bland. Sure, the code is tested. But, really, who wants to read and review those tests? I certainly didn’t use them to develop the model code; they merely assert over time that my assumptions haven’t changed. Let’s look at how I would have written the same tests using BDD, and in particular StoryQ. Here’s a potential usage of my User model.

Story: Creating and maintaining users

  As a user
  I want to have an account
  So that I can register, login and return to the site

  Scenario 1: Registration Page
    Given I enter my name, address, email and password     
    When username isn't longer than 4 characters           
      And email is valid
      And password is correct length and matches confirm   
      And address is valid                                 
    Then I can now login
      And I am sent a confirmation email

Leaving aside any problems in the workflow (and there are some), the story works with the model in context. Here is the code that generates this story:

[Test]
public void Users()
{
    Story story = new Story("Creating and maintaining users");

    story.AsA("user")
        .IWant("to have an account")
        .SoThat("I can register, login and return to the site");

    story.WithScenario("Registration Page")
        .Given(Narrative.Exec("I enter my name, address, email and password", 
                 () => Assert.IsTrue(new User().SetUp().IsOKToAccept())))
        
        .When(Narrative.Exec("username isn't longer than 4 characters", 
                 () => Assert.IsFalse(new User{Name = "Too_long"}.SetUp().IsOKToAccept())))

        .And(Narrative.Exec("email is valid", 
                 () => Assert.IsFalse(new User { Email = "bad_email" }.SetUp().IsOKToAccept())))

        .And(Narrative.Exec("password is correct length and matches confirm", 
                 () => Assert.IsFalse(new User { Password = "one_version", Confirm = "different"}.SetUp().IsOKToAccept())))

        .And(Narrative.Exec("address is valid", 
                  () => Assert.IsFalse(new User { Address = new Address{Street = null}}.SetUp().IsOKToAccept())))
        
        .Then(Narrative.Text("I can now login"))
        .And(Narrative.Text("I am sent a confirmation email"));

    story.Assert();
}

Working with BDD around domain models and validations makes sense to me. I think it is a good way to report validations back to the client and the team. There is also a good delineation in the structure between the “valid” model (the “Given” section) and the “invalid” edge cases (the “When” section). In this case, it also demonstrates that these models so far do nothing, because there are no tests in the “Then” section.

Some quick conclusions

  1. The syntactic sugar of object initializers in C# 3.5 avoids the need for the “with” methods found in the Test Data Builder pattern
  2. Extensions may be a better replacement for the Build method in the builder
  3. Object mothers are still helpful to retain separation of concerns (eg data from construction)
  4. Go BDD ;-)

Well that’s about it for now. Here’s a download of the full code sample in VS2008.

Postscript: About Naming Conventions

How do I go about naming my object mother? I have lots of options: Test, Fake, ObjectMother, OM, Dummy. If I were a purist, it would be ObjectMother, but that doesn’t sit well with me when explaining it to others. Although Test makes the pattern explicit, I find Fake most useful (eg FakeUser): it rolls off the tongue, and it is self-evident enough to outsiders as a strategy. Test (eg TestUser), on the other hand, is generic enough that I have to remind myself of its purpose, and I find I always need to do quick translations in the tests themselves; for example, with UserTest and TestUser I have to re-read them to check which I am working with. For these samples, I have used TestUser. If you come across my production code you will find FakeUser. I hope that doesn’t prove problematic in the long run, given it isn’t really a fake.

RSync quick reference

November 19th, 2008 1 comment

I always forget what rsync commands I actually need to use.

Backups (one-way)

Quick Backup from one folder to another

rsync -a directory1/ directory2/

Notes: remember the forward slashes on each directory. This does a simple one-way archive
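To make the trailing-slash note concrete, here is a small sketch (the /tmp/rsdemo scratch path is made up for illustration): a source written as directory1/ syncs the contents of the folder, whereas directory1 without the slash syncs the folder itself into the destination.

```shell
# Illustrative only: /tmp/rsdemo is a made-up scratch path.
rm -rf /tmp/rsdemo && mkdir -p /tmp/rsdemo/dir1
echo "hello" > /tmp/rsdemo/dir1/file.txt

# Trailing slash on the source: copy the CONTENTS of dir1 into dir2.
rsync -a /tmp/rsdemo/dir1/ /tmp/rsdemo/dir2/

# No trailing slash on the source: copy dir1 ITSELF into dir3.
rsync -a /tmp/rsdemo/dir1 /tmp/rsdemo/dir3/
```

After this runs, dir2 holds file.txt directly, while dir3 holds dir1/file.txt.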

Quick Backup from one folder to another ensuring DELETIONS

This ensures that extra files in the destination directory are DELETED

rsync -a --delete directory1/ directory2/

Notes: remember the forward slashes on each directory. This does a simple one-way archive

Dry Run

rsync -a -n -v directory1/ directory2/
  • -n = –dry-run
  • -v = verbose (you need both)

Sync (two-way)

Sync between two folders

rsync -a -u directory1/ directory2/
rsync -a -u directory2/ directory1/

This moves all the files from the first folder to the second, and then does the reverse. The problem is that files you deleted in the first folder (but which still exist in the second) will come back again. At that point you probably want to think about another system, eg git or bazaar!

Archive folders to a server

Use this Ruby script to do the job: http://github.com/RichGuk/rrsync/tree/master. I have since rewritten the script; see the new version here: http://github.com/toddb/rrsync/tree/master

#!/usr/bin/ruby
require 'rubygems'
require 'logger'
require 'benchmark'
require 'ping'
require 'fileutils'
require 'open3'

#============================= OPTIONS ==============================#
# == Options for local machine.
SSH_APP       = 'ssh'
RSYNC_APP     = 'rsync'

EXCLUDE_FILE  = '/path/to/.rsyncignore'
DIR_TO_BACKUP = '/test'
LOG_FILE      = '/var/log/rrsync.log'
LOG_AGE       = 'daily'

EMPTY_DIR     = '/tmp/empty_rsync_dir/' #NEEDS TRAILING SLASH.
# == Options for the remote machine.
SSH_USER      = 'user'
SSH_SERVER    = 'x.dreamhost.com'
SSH_PORT      = '' #Leave blank for default (port 22).
BACKUP_ROOT   = '/home/.machine/user/backup'
BACKUP_DIR    = BACKUP_ROOT + '/' + Time.now.strftime('%A').downcase
RSYNC_VERBOSE = '-v'
RSYNC_OPTS    = "--force --ignore-errors --delete-excluded --exclude-from=#{EXCLUDE_FILE} --delete --backup --backup-dir=#{BACKUP_DIR} -a"
# == Options to control output
DEBUG         = true #If true output to screen else output is sent to log file.
SILENT        = false #Total silent = no log or screen output.
#========================== END OF OPTIONS ==========================#

if DEBUG && !SILENT
  logger = Logger.new(STDOUT, LOG_AGE)
elsif LOG_FILE != '' && !SILENT
  logger = Logger.new(LOG_FILE, LOG_AGE)
else
  logger = Logger.new(nil)
end
ssh_port = SSH_PORT.empty? ? '' : "-e 'ssh -p #{SSH_PORT}'"
rsync_cleanout_cmd = "#{RSYNC_APP} #{RSYNC_VERBOSE} #{ssh_port} --delete -a #{EMPTY_DIR} #{SSH_USER}@#{SSH_SERVER}:#{BACKUP_DIR}"
rsync_cmd = "#{RSYNC_APP} #{RSYNC_VERBOSE} #{ssh_port} #{RSYNC_OPTS} #{DIR_TO_BACKUP} #{SSH_USER}@#{SSH_SERVER}:#{BACKUP_ROOT}/current"

logger.info("Started running at: #{Time.now}")
run_time = Benchmark.realtime do
  begin
    raise Exception, "Unable to find remote host (#{SSH_SERVER})" unless Ping.pingecho(SSH_SERVER)
       
    FileUtils.mkdir_p("#{EMPTY_DIR}")
    Open3::popen3("#{rsync_cleanout_cmd}") { |stdin, stdout, stderr|
      tmp_stdout = stdout.read.strip
      tmp_stderr = stderr.read.strip
      logger.info("#{rsync_cleanout_cmd}\n#{tmp_stdout}") unless tmp_stdout.empty?
      logger.error("#{rsync_cleanout_cmd}\n#{tmp_stderr}") unless tmp_stderr.empty?
    }
    Open3::popen3("#{rsync_cmd}") { |stdin, stdout, stderr|
      tmp_stdout = stdout.read.strip
      tmp_stderr = stderr.read.strip
      logger.info("#{rsync_cmd}\n#{tmp_stdout}") unless tmp_stdout.empty?
      logger.error("#{rsync_cmd}\n#{tmp_stderr}") unless tmp_stderr.empty?
    }
    FileUtils.rmdir("#{EMPTY_DIR}")
  rescue Errno::EACCES, Errno::ENOENT, Errno::ENOTEMPTY, Exception => e
    logger.fatal(e.to_s)
  end
end
logger.info("Finished running at: #{Time.now} - Execution time: #{run_time.to_s[0, 5]}")

Categories: Uncategorized Tags:

Update to Hemingway theme for textile

November 17th, 2008 No comments

Now that I am uploading my blog directly, I actually needed textile installed. See Textile plugin for WordPress. However, the code didn’t look that good with my Hemingway skin. Here is the snippet from the css to make the code look better.

pre{
  width:95%;
  padding:1em 0;
  overflow:auto;
  color: #444444;
  font-family:'Bitstream Vera Sans Mono','Courier',monospace;
  font-size:105%;
  background-color:#F8F8F8;
  margin-top:0.3em;
  padding:1.2em 0.7em 1.7em 0.7em;
  border:1px solid #E9E9E9;
}

pre > p{
  width:95%;
  padding:1em 0;
  overflow:auto;
  color: #444444;
  font-family:'Bitstream Vera Sans Mono','Courier',monospace;
  font-size:105%;
  background-color:#F8F8F8;
  margin-top: -1.5em;
  margin-bottom: -1.5em;
  line-height: 1.2em;
}

Also I had to patch the code to get lists to work correctly. I went back to the original SVN code (however, I could not take the entire file so I just took the list code). Here it is (sorry it’s not a diff file):

// -------------------------------------------------------------
function lists($text)
{
  return preg_replace_callback("/^([#*]+$this->c .*)$(?![^#*])/smU", array(&$this, "fList"), $text);
}

// -------------------------------------------------------------
function fList($m)
{
  $text = preg_split('/\n(?=[*#])/m', $m[0]);
  foreach($text as $nr => $line) {
    $nextline = isset($text[$nr+1]) ? $text[$nr+1] : false;
    if (preg_match("/^([#*]+)($this->a$this->c) (.*)$/s", $line, $m)) {
      list(, $tl, $atts, $content) = $m;
      $nl = '';
      if (preg_match("/^([#*]+)\s.*/", $nextline, $nm))
        $nl = $nm[1];
      if (!isset($lists[$tl])) {
        $lists[$tl] = true;
        $atts = $this->pba($atts);
        $line = "\t<" . $this->lT($tl) . "l$atts>\n\t\t<li>" . rtrim($content);
      } else {
        $line = "\t\t<li>" . rtrim($content);
      }

      if(strlen($nl) <= strlen($tl)) $line .= "</li>";
      foreach(array_reverse($lists) as $k => $v) {
        if(strlen($k) > strlen($nl)) {
          $line .= "\n\t</" . $this->lT($k) . "l>";
          if(strlen($k) > 1)
            $line .= "</li>";
          unset($lists[$k]);
        }
      }
    }
    else {
      $line .= "\n";
    }
    $out[] = $line;
  }
  return $this->doTagBr('li', join("\n", $out));
}

There are still jobs to be done on this code. I don’t think it deals with the <pre> tag correctly: it doesn’t escape HTML code, and it inserts <p> and <br/> tags. I think preformatted text should just be left alone! It may be fixed in the latest release that I have patched across. That’s another day’s work.

Categories: Uncategorized Tags: , , ,


jQuery and testing — JSUnit, QUnit, JsSpec [Part 2]

November 15th, 2008 No comments

Trying QUnit with jQuery

In the JSUnit entry, one of the main problems was with the sequencing of calls. Let’s see how QUnit handles this. QUnit has a simple solution to this problem: stop() and start() commands to synchronise sequences. The basic approach is that QUnit calls a test function; when it hits stop(), it will not start the next test until you call start(), but it does complete the rest of the current test.

The code I ended up with was just what I wanted, compared to JsUnit. Previously, I had said that basically I wanted to call my function and then check that the result was in fact what I expected. Here is the main JavaScript code (it’s nice and concise).

One point about the code at this stage: the tests had to be inside the success callback to be run. I wonder if this is going to create a code smell in the long run. Plus, there are no setup/teardown cycles; again, I wonder what that will mean in the long run. Perhaps nothing?

module("XML to object");

test("Check the xml to object conversion without showing the tree", function() {
  
  expect( 5 )
  stop();
  
   $().storyq({
        url: 'data/results-01.xml', 
        load: '',
        success: function(feed) {
          ok( feed, "is an object" )
          ok( !$.isFunction(feed), "is not a function" )
          ok( feed.version, "has a version: " + feed.version )
          ok( feed.items, "has items")
          same( feed, reference, "is the same as the reference object in data/results-01.js")
          start();
        }
    });

});

Honestly, it is that easy.

Here’s a couple of features, restated from above, that took me half an hour to work out. stop() and start() almost work as you expect, but I had to put some alerts in to check the order of execution. Basically, stop() halts any new tests from executing, but it keeps the current test executing; the effect is that the asynchronous call can complete. start() then tells the test runner to continue. If you don’t call start(), your test runner halts altogether. There is another option: pass a timeout to stop(), and then you don’t need a start(). I prefer to keep the tests running as quickly as possible.

Just another note: I decided to do a same() comparison. I have saved and preloaded the reference object from a file for ease; this keeps the test easy to read, as my reference object is quite long. You can see the inclusion of this file in the full page below: <script type="text/javascript" src="data/results-01.js"/>

<html>
<head>
  <title>Unit tests for StoryQ Results viewer</title>
  <link rel="stylesheet" href="../../lib/qunit/testsuite.css" type="text/css" media="screen" />

  <link rel="stylesheet" href="../../lib/treeview/jquery.treeview.css" />
  <link rel="stylesheet" href="../../src/css/storyq.treeview.css" />
  <link rel="stylesheet" href="../../src/css/storyq.screen.css" />

  <script src="../../lib/jquery/jquery.js"></script>
  <script src="../../lib/jquery/jquery.cookie.js" type="text/javascript"></script>
  <script src="../../lib/treeview/jquery.treeview.js" type="text/javascript"></script>
  <script src="../../src/storyq.js" type="text/javascript"></script>
  <script src="../../src/storyq.treeview.js" type="text/javascript"></script>
  <script src="../../src/storyq.xml.js" type="text/javascript"></script>
  <script src="../../src/storyqitem.js" type="text/javascript"></script>
  <script src="../../src/storyqresults.js" type="text/javascript"></script>

  <script type="text/javascript" src="../../lib/qunit/testrunner.js"></script>
  <script type="text/javascript" src="data/results-01.js"></script>

  <script type="text/javascript">
    module("XML");

    test("Check the xml to object conversion without showing the tree", function() {

      expect( 5 )
      stop();

       $().storyq({
            url: 'data/results-01.xml', 
            load: '',
            success: function(feed) {
              ok( feed, "is an object" )
              ok( !$.isFunction(feed), "is not a function" )
              ok( feed.version, "has a version: " + feed.version )
              ok( feed.items, "has items")
              same( feed, reference, "is the same as the reference object in data/results-01.js")
              start();
            }
        });

    });
  </script>

</head>
<body>

 <h1>QUnit tests</h1>
 <h2 id="banner"></h2>
 <h2 id="userAgent"></h2>
 <ol id="tests"></ol>

 <div id="main"></div>

</body>
</html>

The output results from QUnit are nice to look at and easy to read. I did have a couple of errors that weren’t the easiest to debug given the output; partly, though, that was because I was new to it and taking too big a step at times!

I’m happy with QUnit – and there are plenty of examples in the JQuery test suite. I can see that I would do TDD with this.

Being a BDD type of guy, I’m now off to see what JsSpec has to offer.

jQuery and testing – JSUnit, QUnit, JsSpec [Part 1]

November 15th, 2008 No comments

Trying JSUnit with jQuery

I started with JSUnit first because it is tried and true (and, to tell the truth, I thought it would be fine so didn’t bother with a quick search for alternatives).

For the impatient, I won’t be going with JSUnit and here are some reasons:

  • the setup (ie onload – pausing) needed to load the data doesn’t integrate well with jQuery. JSUnit has its own setup and document loader, but I am still wondering how to do this transparently (ie I didn’t actually get the test to work – I wasn’t exhaustive, but then again I don’t think I should have needed to be to get this test going)
  • Firefox 3.0 on Mac doesn’t integrate well (ie it doesn’t work), but Safari does! Unfortunately, I am a little bound to Firebug for development.
  • JSUnit doesn’t report tests well either

I went and had a look at how JSUnit does it. (Remember that this framework has been around a lot longer than jQuery.) Here is the extract from the code/test samples. The basic setup is to hook into an existing testManager that exists within a frame and then get the data from there. Furthermore, you need to manage your own flag to signal that the process has completed. JSUnit then looks through all functions that start with test; in this case, testDocumentGetElementsByTagName checks the expected data. Here I assume that the tests are run in a particular frame (buffer()) that testManager gives us access to.

var uri = 'tests/data/data.html';

function setUpPage() {
    setUpPageStatus = 'running';
    top.testManager.documentLoader.callback = setUpPageComplete;
    top.testManager.documentLoader.load(uri);
}

function setUpPageComplete() {
    if (setUpPageStatus == 'running')
        setUpPageStatus = 'complete';
}

function testDocumentGetElementsByTagName() {
    assertEquals(setUpPageStatus, 'complete');
    var buffer = top.testManager.documentLoader.buffer();
    var elms = buffer.document.getElementsByTagName('P');
    assert('getElementsByTagName("P") returned is null', elms != null);
    assert('getElementsByTagName("P") is empty', elms.length > 0);
}

Below is the rewritten code to exercise my code. Here are a couple of the features:

  • for setup, pass in the correct xml file via uri variable (obviously)
  • to test, I have written a test testXML2Object.

There is one major design problem with the code itself that didn’t allow me to use my own data loader. You will see the line var feed = new StoryQResults(buffer);. Where did that come from? It is nothing close to the code I said I wanted to exercise. It is in fact from within the code I want to exercise. The major issue I found here is that, to load and test data, I had to use the testManager rather than my own $().storyq() call.

The other problem was that it wouldn’t return the result I wanted either. I was expecting my feed variable to be an object holding the results. Instead I was getting a reference to the function StoryQResults, and given that it wasn’t running in Firefox and I didn’t have Firebug, life was getting a little hard.

var uri = '../../xml/results-01.xml';

function setUpPage() {
    setUpPageStatus = 'running';
    top.testManager.documentLoader.callback = setUpPageComplete;
    top.testManager.documentLoader.load(uri);
}

function setUpPageComplete() {
    if (setUpPageStatus == 'running')
        setUpPageStatus = 'complete';
}

function testXML2Object() {
    assertEquals(setUpPageStatus, 'complete');
    var buffer = top.testManager.documentLoader.buffer();
    
    var feed = new StoryQResults(buffer);               

    assertEquals(feed.version, '0.1')
    assertEquals("Number of stories", $(feed).size(), 1)
    $.each(feed, function(){
      alert(this)               
    })

}

Even though I know that I am getting a function returned instead of an object, I am still going to see if I can invoke my own loading function within JSUnit. Here’s what the code would look like below. I wouldn’t recommend running it; just take a look at it. The code to me is a mixture of styles that starts to bloat. On the one hand, JSUnit has this setup phase with explicit flags and no anonymous functions. On the other hand, because I am using jQuery conventions, I encapsulate a lot of that logic. For example, jQuery(function(){}) waits for the page to be loaded before executing $("#tree").storyq(), and then I have the callback function inline. It looks good from the outside, but it doesn’t work.

The order of calls is: loading, in test and then loaded, indicating that my jQuery function runs after the test has been run. The order should have been loading, loaded and then in test. setUpPage runs within its own setup/test/teardown cycle, but my jQuery call isn’t linked into it: jQuery is waiting on a document flag rather than the custom flag (within testManager). At this point, I don’t wish to dig into these libraries to get it all to play nicely. It wasn’t designed to work this way. Let’s find one that was.

var data = '';

function setUpPage() {
    setUpPageStatus = 'running';
    alert('loading')
    jQuery(function() {
      $("#tree").storyq({
          url: '../../xml/results-01.xml', 
          success: null, 
          load: function(feed) {
            data = feed;
            alert('loaded')
            setUpPageComplete()
            }
          });
    });
}

function setUpPageComplete() {
    if (setUpPageStatus == 'running')
        setUpPageStatus = 'complete';
}

function testXML2Object() {
    alert('in test')
    assertEquals(setUpPageStatus, 'complete');

    assertEquals(data.version, '0.1')
    assertEquals("Number of stories", $(feed).size(), 1)
}
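For what it’s worth, the bridging that JSUnit’s setUpPage cycle relies on amounts to polling a custom flag. A plain-JavaScript sketch of that idea (waitFor is a made-up helper for illustration, not part of JSUnit):

```javascript
// Poll a custom flag until the asynchronous setup completes, then run
// the test body. This mirrors what JSUnit does with setUpPageStatus.
var setUpPageStatus = 'running';
var data = null;

// Stand-in for the jQuery-driven load inside setUpPage().
setTimeout(function () {
  data = { version: '0.1' };
  setUpPageStatus = 'complete';
}, 10);

function waitFor(predicate, onReady) {
  var timer = setInterval(function () {
    if (predicate()) {
      clearInterval(timer);
      onReady();
    }
  }, 5);
}

waitFor(function () { return setUpPageStatus === 'complete'; }, function () {
  // Only now can the test body assert safely.
  console.log('version: ' + data.version);
});
```

The trouble in the attempt above is that jQuery’s ready event and JSUnit’s own cycle each watch a different flag, so the two never line up.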

I’m invoking the two-feet principle: I’m moving on to the next framework (after a quick search): QUnit

jQuery and testing – JSUnit, QUnit, JsSpec [Part 3]

November 14th, 2008 No comments

Trying JsSpec with jQuery

This is part three of three. The previous two were focused on managing the problem of timing: JSUnit got too hard, and QUnit is easy but you still have to manage the timings yourself. With JsSpec there is no problem because it is all managed for you. Nice work! Here’s the code I had to write.

A couple of things about writing it. I had to dig into the code to find the setup/teardown lifecycle keywords. It turns out there is setup/teardown per test (eg before) and per test suite (eg before all). I also had to dig around to find the comparators (eg should_be, should_not_be_null). I couldn’t find any documentation.

describe('I need to read the xml and convert into object', {
  'before all': function() {
    target = {};
    $().storyq({
        url: 'data/results-01.xml', 
        load: '',
        success: function(feed) {
          target = feed
      }
    })
   
  },
  
  'should return an object': function() {
    value_of(target).should_not_be_null()
  },
  
  'should not be a function': function() {
    value_of(typeof target).should_not_be(typeof Function )
  },
  
  'should have a version': function(){
    value_of(target.version).should_be('0.1')
  },
  
  'should have items': function(){
    value_of(target.items).should_not_be_empty()
  },
  
  'should have the same value as the reference object in data/results-01.js': function(){
    value_of(reference).should_not_be_undefined()
    value_of(target).should_be(reference)
  },
  
})
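The comparator style is easy enough to approximate; a toy value_of in plain JavaScript shows the shape of the API (an illustration only, not JsSpec’s actual implementation):

```javascript
// A toy matcher in the JsSpec style: value_of(x).should_be(y), etc.
// Comparison via JSON.stringify is a simplification for illustration.
function value_of(target) {
  return {
    should_be: function (expected) {
      if (JSON.stringify(target) !== JSON.stringify(expected)) {
        throw new Error('expected ' + JSON.stringify(expected) +
                        ' but was ' + JSON.stringify(target));
      }
    },
    should_not_be_null: function () {
      if (target === null) throw new Error('expected a non-null value');
    },
    should_not_be_empty: function () {
      if (!target || target.length === 0) throw new Error('expected a non-empty value');
    }
  };
}

value_of({ version: '0.1' }).should_be({ version: '0.1' });
value_of([1, 2, 3]).should_not_be_empty();
console.log('all matchers passed');
```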

The output looks nice too ;-) Here’s the overall code. Notice that I have also used the technique of a reference object in results-01.js:

&lt;html>
&lt;head>
&lt;title>JSSpec results&lt;/title>
&lt;link rel="stylesheet" type="text/css" href="../lib/jsspec/JSSpec.css" />
&lt;script type="text/javascript" src="../lib/jsspec/JSSpec.js"/>

&lt;script src="../lib/jquery/jquery.js"/>
&lt;script src="../lib/jquery/jquery.cookie.js" type="text/javascript"/>
&lt;script src="../lib/treeview/jquery.treeview.js" type="text/javascript"/>
&lt;script src="../build/dist/jquery.storyq.js" type="text/javascript"/>

&lt;script type="text/javascript" src="data/results-01.js"/>
&lt;script type="text/javascript" src="specs/treeview.js"/>  

&lt;script type="text/javascript">

  describe('I need to read the xml and convert into object', {
    'before all': function() {
      target = {};
      $().storyq({
          url: 'data/results-01.xml', 
          load: '',
          success: function(feed) {
            target = feed
        }
      })

    },

    'should return an object': function() {
      value_of(target).should_not_be_null()
    },

    'should not be a function': function() {
      value_of(typeof target).should_not_be(typeof Function )
    },

    'should have a version': function(){
      value_of(target.version).should_be('0.1')
    },

    'should have items': function(){
      value_of(target.items).should_not_be_empty()
    },

    'should have the same value as the reference object in data/results-01.js': function(){
      value_of(reference).should_not_be_undefined()
      value_of(target).should_be(reference)
    },

  })
&lt;/script>

&lt;/head>
    &lt;body>
      &lt;div style="display:none;">&lt;p>A&lt;/p>&lt;p>B&lt;/p>&lt;/div>
    &lt;/body>
&lt;/html>

JsSpec isn’t written using jQuery, so there are a couple of issues I can’t pin down. When I get errors it stops the tests completely. I suspect this is because these tests use callbacks that don’t return a (jQuery) object. jQuery does a lot of object chaining and JsSpec isn’t cut out for it (I think).

Well, that’s it.

jQuery and testing – JSUnit, QUnit, JsSpec [Introduction]

November 12th, 2008 No comments

I had been writing a jQuery parser and then realised once I had spiked it that I hadn’t actually written any tests. So, these are some results from a spike in unit testing a jQuery plugin.

Some background: the plugin is a results viewer for an xml feed from StoryQ. I have run some tests and have results; now I want to see them in html format. The plugin merely transforms the xml to be displayed using the treeview plugin. I wanted to avoid handing in a json feed formatted specifically for the treeview; I wanted all this to happen client side.

The tests have two aspects:

  • xml loading and parsing into an object
  • rendering the object into a tree (at that point treeview takes over)

In short, I want to test the code underlying this call that returns the feed before populating an <ul id="tree"> element:
$('#tree').storyq({
    url: 'tests/data/results-01.xml',
    success: function(feed) {
      $("#tree").treeview(); //populate the tree
    }
});
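To make the rendering aspect concrete, the object-to-tree step amounts to building the nested list that treeview expects. A rough sketch in plain JavaScript, with an assumed feed shape (the text/items properties are my guesses for illustration, not the real StoryQ schema):

```javascript
// Build the nested <ul>/<li> markup that treeview consumes from a feed
// object. The feed shape here (text, items) is assumed for illustration.
function feedToList(items) {
  var html = '<ul>';
  for (var i = 0; i < items.length; i++) {
    var item = items[i];
    html += '<li>' + item.text;
    if (item.items && item.items.length) {
      html += feedToList(item.items); // recurse into nested results
    }
    html += '</li>';
  }
  return html + '</ul>';
}

var feed = {
  version: '0.1',
  items: [
    { text: 'Story: view results', items: [
      { text: 'Scenario: parse the xml', items: [] }
    ]}
  ]
};

console.log(feedToList(feed.items));
```

Once markup like this is injected into the <ul id="tree"> element, $("#tree").treeview() can decorate it.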

Problem for any framework: sequencing

Let’s take a look at what I want as test code. In this code, I want to populate data with the feed variable returned in the success callback. The test can then check for values. Take a look at the code below. When I run the code, I should (ideally) see the sequence of alerts: loaded, start test, end test. Of course, I don’t. I see start test, end test, loaded as the sequence. It should be obvious that the success callback hasn’t been called yet: javascript is run sequentially. Okay, nothing here is surprising. I laboured this point because any of the frameworks must deal with this problem.

var data = {};

jQuery(function() {
  $().storyq({
      url: '../../xml/results-01.xml', 
      success: function(feed) {
        data = feed;
        alert('loaded')       
        }
      });
});

function testXML2Object() {
  alert('start test')
  assertEquals(data, expected, "Assume that expected is the real/correct result object");
  alert('end test')     
}
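The same ordering can be reproduced without jQuery or any test framework; in this sketch setTimeout stands in for the XHR behind $().storyq():

```javascript
// Demonstrate the sequencing problem: the test body runs before the
// asynchronous load has populated data.
var data = {};
var log = [];

function loadAsync(callback) {
  // setTimeout stands in for the XHR behind $().storyq().
  setTimeout(function () {
    data = { version: '0.1' };
    log.push('loaded');
    callback();
  }, 0);
}

function testXML2Object() {
  log.push('start test');
  // data is still empty here: the callback has not fired yet.
  log.push(data.version === undefined ? 'data empty' : 'data ready');
  log.push('end test');
}

loadAsync(function () {
  // Only inside the callback is it safe to assert against data.
  console.log(log.join(', '));
});

testXML2Object();
```

Running this prints the test entries first and loaded last, the same start test, end test, loaded order: any framework has to give the callback a way to gate the assertions.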

Frameworks looked at

I looked at three frameworks for xml loading and parsing:

  • JSUnit
  • QUnit
  • JsSpec

Conclusions

  • In short, QUnit and JsSpec are equally easy to use. JSUnit now seems too heavy given what we now have.
  • QUnit is a TDD framework used by jQuery itself. I suspect it might survive longer. There are no setup/teardown phases.
  • JsSpec is a BDD framework that doesn’t use jQuery at all but can easily be used with jQuery plugins. There are good setup/teardown phases for tests and suites.
  • Your choice between the two is likely to come down to your preference for TDD or BDD. It probably depends upon which boundaries you want to test and how you want to go about specifying.

What I didn’t do:

  • integration with a CI system
  • cross-browser testing
  • test with selenium or watir

Naked Planning – Arlo Belshee from Agile 2008

November 11th, 2008 No comments

There is a nice podcast by Arlo Belshee on the Agile Toolkit Podcast: Naked Planning, Promiscuous Planning and other Unmentionables (sorry, I don’t have a url). He makes a couple of interesting points (around 17–23 mins). He is basically using Lean-type queuing for planning. I am interested in his ideas around prioritisation and estimation. He argues that having estimations during prioritisation creates a selection bias. That is, the business does not necessarily choose the options that are best for it (ie delivering business value in the form of cash or a long-term differentiator). He argues against a traditional cost-benefit analysis. This analysis is a 2 × 2 grid, cost (x) by value (y), drawn as a scatterplot as below:


          Cost-Benefit
          ============
       ^
high   |                                    long-term value (market differentiator)
       |    short-term value (cash)
       |                                   
(value)|                         
       |                                   
       |    low-cost/low-value (death march)
low    |                                       never done (expensive and little value)
       +--------------------------------------------------------------------------------------------------------------------->
           low                    (cost)                 high 
    

In this approach there are four main positions: (1) top-left holds the short-term value options that are cheap and easy to do but provide high value. These tend to get you cash in the market place, but competition can copy and innovate at the same rate, so they are valuable only in the short term and are also known as cash cows. (2) top-right holds the long-term value options. These are the market differentiators that you want to build. Because they have a high cost, they are harder for the competition to create. (3) bottom-left holds the low-cost/low-value items, the ones we tend to think of as quick wins. He points out that they are the death march: they merely distract from delivering value. (4) bottom-right options are normally thrown out and are not a problem.

He argues that we need to remove the x axis, cost. Prioritisation should only be about business value, with the options lined up in order. In fact, he points out that when you take away the cost, his group opts for high-value propositions. This mixture of short- and long-term value propositions is a better product mix. But if you leave in the cost, people do tend to want to include the death-march items.

Therefore, do value-based analysis. Rather than cost-benefit, do benefit-benefit analysis. This approach hides costs, or in fact never does the estimations in the first place, because having an estimation is a negative business-value proposition and actually has a positive cost. It is a negative business-value proposition because it creates a selection bias under which people select non-business-value options. It has a positive cost because someone has to make up the estimations. It has the further cost that the made-up part is just that: made up. Better to work with a queue of real wait time over the duration of the project.

He continues on to talk about estimations as wait time before starting the next MMF.

Categories: Uncategorized Tags: , , , ,

Dokkit vs Blog entry

November 10th, 2008 No comments

I have a problem: how do I work both online and offline on my blog? If you look at, say, Fowler’s Bliki, he is clearly composing offline and then publishing. Looking in the Ruby world, and having had a previous life writing documentation, I looked at Rote and then Dokkit, thinking I was writing something more like a book in structure. First, forget Rote, as it seems to have been discontinued – and it has a known bug with the latest version of Rake that still hasn’t been patched. Plus, you tend to need to install RMagick too (all too much time). Dokkit, though, works well. It creates websites from individual pages and can publish to multiple formats (text, html, pdf). I thought I could leverage this because I wanted to write some tutorials as well as entries and publish them in multiple formats. But I found a couple of problems for me (rather than with Dokkit):

  • I wanted to publish to WordPress, and Dokkit uses Deplate as its markup
  • There is no TextMate bundle available (editing or publishing)

Once I started to actually write the Textmate bundle, I found that what I really wanted was the Blogging bundle in Textmate. All I need to do is create blog entries and publish them individually. And that is how this entry was written. Now I have a directory of entries that I can update offline, keep in source control and publish.

Main steps using Blogging bundle in Textmate and publishing to WordPress

Setup WordPress credentials

  • Bundles » Blogging » Setup Blogs
  • Enter your WordPress username and site as per the instructions

Create your file

  • Start up Textmate
  • File » New From Template » Blogging » Blogging Post (Textile|Markdown)
  • Save
  • Make your updates
  • Update your headers: Bundles » Blogging » Headers » add as needed (if you want categories, you will need to have already set them up from the previous step – type cat and then tab)

Note: to get the tab triggers to work I had to change the scope selector adding textile to make it text.html.textile.

For example:

Title: Dokkit vs Blog entry
Blog: norockets
Keywords: 
Tags: 
Comments: On

I have a problem how do I work both online and offline on my blog? If you look at say Fowler's "Bliki":http://www.martinfowler.com/bliki/ then he is clearly composing offline and then publishing. Looking in the Ruby world, and having had a previous life writing documentation, I looked at "Rote":http://rote.rubyforge.org and then "Dokkit":http://dokkit.rubyforge.org. I thought that I was writing something more like a book in structure. First, forget Rote as it... 

Checkin/Commit your work

I’m using Bazaar – otherwise there is SVN or Git. Personally, I need to back it up at some stage too, because my Bazaar repository isn’t centrally located for my personal work (I rsync it to another server)

Publish to WordPress and check

This is all via the Blogging bundle

* Preview first 
* Post to Blog
* View online version

Migration issues

If you have previously created blog entries but don’t have them locally, the Blogging bundle lets you pull those down easily too

* Fetch Posts 
* Select from the returned list
* Save
Categories: Uncategorized Tags: , , ,