
Posts Tagged ‘test automation pyramid’

Object Mothers as Fakes

July 30th, 2009

Update: I was in a pairing session and we decided that this syntax is way too noisy, so we have rewritten it, effectively to a.Banner.isValid().with(new{Name="john"}).
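
The original post doesn’t show the new implementation, but one plausible shape for a with()-style helper (the name, placement and behaviour here are my assumptions) is an extension method that copies the properties of an anonymous object over the fake:

  namespace Contrib.TestHelper
  {
      public static class WithExtensions
      {
          /// <summary>
          /// Hypothetical sketch: overrides matching writable properties on the
          /// fake with the values supplied in an anonymous object.
          /// </summary>
          public static T With<T>(this T fake, object overrides)
          {
              foreach (var over in overrides.GetType().GetProperties())
              {
                  var target = fake.GetType().GetProperty(over.Name);
                  if (target != null && target.CanWrite)
                      target.SetValue(fake, over.GetValue(overrides, null), null);
              }
              return fake;
          }
      }
  }

  // Usage (hypothetical): ValidMO.ImageBanner.With(new { Name = "john" }).IsValid()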

This is a follow-up to Fakes Aren’t Object Mothers Or Test Builders. Over the last few weeks I have had the chance to reengage with MOs and fakes. I am going to document a helper or two for building fakes using extension methods. But before that I want to make an observation or two:

  • I have been naming my mother objects ValidMO.xxxxx and making sure they are static
  • I find that I have a ValidMO per project and that is better than a shared one. For example, in my unit tests the ValidMO tends to need Ids populated, and that is really good as I explore the domain and have to build up the object graph. In my integration tests, however, Id shouldn’t be populated because that comes from the database and of course changes over time (see the sketch after this list). My MOs in the regression and acceptance tests are different again. I wouldn’t have expected this. It is great because I don’t have to spawn yet another project just to hold the MO data as I have in previous projects.
  • I find that I only need one MO per object
  • Needing another valid example in fact suggests a new type of domain object
  • I don’t need to have invalid MOs (I used to)
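
As a sketch of that per-project difference (the shapes are assumed from the ValidMO shown later in this post), an integration-test mother might look like the unit-test one minus the Id:

  namespace IntegrationTests.Core.Domain.MotherObjects
  {
      public static class ValidMO
      {
          public static IBanner ImageBanner
          {
              get
              {
                  // No Id populated here: in integration tests the database
                  // assigns it and it changes over time
                  return new ImageBanner
                             {
                                 Name = "KiwiSaver",
                                 Url = "http://localhost/repos/first-image.png",
                                 Destination = "http://going-to-after-click.com",
                                 IsActive = true,
                                 IsDeleted = false
                             };
              }
          }
      }
  }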

Now for some code.

When building objects for test, the pattern is one of:

  • provide the MO
  • build the exceptions in a new object and fill out the rest with the MO
  • not use the MO at all (very rarely)

Note that I use this pattern to build objects with basic properties and not Lists – you’ll need to extend the tests/code for that.

To do this I primarily use a builder written as an extension method. This extension differs between unit and integration tests.

Unit test object builder

This code has two tests: Valid and InvalidHasNoName. The tests demonstrate a couple of things, the first of which is that I am being bad by having three assertions in the first test. Let’s just say that’s for readability ;-) The first set of assertions checks that I have set up the ValidMO correctly and that it behaves well with the extension helper “SetupWithDefaultValuesFrom”. This series of tests has been really important in the development of the extensions and quickly points to problems – and believe me this approach hits limits quickly. Nonetheless it is great for most POCOs.

The second test demonstrates testing invalidity by exception. Here the rule is that the image banner is invalid if it has no name. So the code says: make the name empty and populate the rest of the model.

  using Core.Domain.Model;
  using Contrib.TestHelper;
  using Microsoft.VisualStudio.TestTools.UnitTesting;
  using UnitTests.Core.Domain.MotherObjects;
  using NBehave.Spec.MSTest;

  namespace UnitTests.Core.Domain.Model
  {
      /// <summary>
      /// Summary description for Image Banner Property Validation Test
      /// </summary>
      [TestClass]
      public class ImageBannerPropertyValidationTest
      {
          [TestMethod]
          public void Valid()
          {
              ValidMO.ImageBanner.ShouldNotBeNull();
              ValidMO.ImageBanner.IsValid().ShouldBeTrue();
              new ImageBanner().SetupWithDefaultValuesFrom(ValidMO.ImageBanner).IsValid().ShouldBeTrue();
          }

          [TestMethod]
          public void InvalidHasNoName()
          {
              new ImageBanner { Name = "" }.SetupWithDefaultValuesFrom(ValidMO.ImageBanner).IsValid().ShouldBeFalse();
          }
      } 
  }

Here’s the ValidMO.

  namespace UnitTests.Core.Domain.MotherObjects
  {
      public static class ValidMO
      {
          public static IBanner ImageBanner
          {
              get
              {
                  return new ImageBanner
                             {
                                 Id = 1,
                                 Name = "KiwiSaver",
                                 Url = "http://localhost/repos/first-image.png",
                                 Destination = "http://going-to-after-click.com",
                                 Description = "Kiwisaver Banner for latest Govt initiative",
                                 IsActive = true,
                                 IsDeleted = false
                             };
              }
          }
      }
  }

At this stage, you should have a reasonable grasp of the object through the tests and of what we think is exemplary scenario data. Here’s the model if you insist.

  using Castle.Components.Validator;

  namespace Core.Domain.Model
  {
      public class ImageBanner : Banner
      {
          [ValidateNonEmpty, ValidateLength(0, 200)]
          public virtual string Url { get; set; }

          [ValidateNonEmpty, ValidateLength(0, 200)]
          public string Destination { get; set; }
      }
  }

Builder extensions

Now for the extensions that hold this code together. It is a simple piece of code that takes the model and the MO and merges them. Before we go too far: it is actually a little uglier than I want it to be and I haven’t had time to think of a more elegant solution. There are actually two methods. One overrides the fake’s defaults with any non-null properties set on the model, treating zero ints/longs and empty strings as unset; the other also allows zeros and empty strings to count as real values. So generally, you need the two.

  using System;
  using System.Linq;
  using Contrib.Utility;

  namespace Contrib.TestHelper
  {
      /// <summary>
      /// Test helpers on Mother objects
      /// </summary>
      public static class MotherObjectExtensions
      {

          /// <summary>
          /// Sets up the specified model ready for test by merging a fake model with the model at hand. The code merges
          /// the properties of the given model with any defaults from the fake. If the value of a property on the model is a
          /// zero int/long or an empty string it is treated as null and the fake's value is used instead.
          /// </summary>
          /// <typeparam name="T"></typeparam>
          /// <param name="model">The model.</param>
          /// <param name="fake">The fake.</param>
          /// <returns>the model</returns>
          public static T SetupWithDefaultValuesFrom<T>(this T model, T fake)
          {
              var props = from prop in model.GetType().GetProperties() // select model properties to populate the fake because the fake is the actual base
                          where prop.CanWrite
                          && prop.GetValue(model, null) != null
                          && (
                              ((prop.PropertyType == typeof(int) || prop.PropertyType == typeof(int?)) && prop.GetValue(model, null).As<int>() != 0)
                           || ((prop.PropertyType == typeof(long) || prop.PropertyType == typeof(long?)) && prop.GetValue(model, null).As<long>() != 0)
                           || (prop.PropertyType == typeof(string) && prop.GetValue(model, null).As<string>() != String.Empty)
                              )
                          select prop;

              foreach (var prop in props)
                  prop.SetValue(fake, prop.GetValue(model, null), null); //override the fake with model values

              return fake;
          }
   
          /// <summary>
          /// Sets up the specified model ready for test by merging a fake model with the model at hand. The code merges
          /// the properties of the given model with any defaults from the fake. This method is the same as <see cref="SetupWithDefaultValuesFrom{T}"/>
          /// except that empty strings and zero ints/longs are able to be part of the setup model.
          /// </summary>
          /// <typeparam name="T"></typeparam>
          /// <param name="model">The model.</param>
          /// <param name="fake">The fake.</param>
          /// <returns>the model</returns>
          public static T SetupWithDefaultValuesFromAllowEmpty<T>(this T model, T fake)
          {
              var props = from prop in model.GetType().GetProperties() // select model properties to populate the fake because the fake is the actual base
                          where prop.CanWrite
                          && prop.GetValue(model, null) != null
                          select prop;

              foreach (var prop in props)
                  prop.SetValue(fake, prop.GetValue(model, null), null); //override the fake with model values

              return fake;
          }
      }
  }

Oh, and I just noticed there is another little helper in there too. It is the object.As helper that converts my results, in this case to ints, longs and strings. I will include it and thank Rob for the original code and Mark for an update – and probably our employer for sponsoring our after-hours work ;-)

    using System;
    using System.IO;


    namespace Contrib.Utility
    {
        /// <summary>
        /// Class used for type conversion related extension methods
        /// </summary>
        public static class ConversionExtensions
        {
            public static T As<T>(this object obj) where T : IConvertible
            {
                return obj.As<T>(default(T));
            }

            public static T As<T>(this object obj, T defaultValue) where T : IConvertible
            {
                try
                {
                    string s = obj == null ? null : obj.ToString();
                    if (s != null)
                    {
                        Type type = typeof(T);
                        bool isEnum = typeof(Enum).IsAssignableFrom(type);
                        return (T)(isEnum ?
                            Enum.Parse(type, s, true)
                            : Convert.ChangeType(s, type));
                    }
                }
                catch
                {
                }
                return defaultValue; 
            }

            public static T? AsNullable<T>(this object obj) where T : struct, IConvertible
            {
                try
                {
                    string s = obj as string;
                    if (s != null)
                    {
                        Type type = typeof(T);
                        bool isEnum = typeof(Enum).IsAssignableFrom(type);
                        return (T)(isEnum ?
                            Enum.Parse(type, s, true)
                            : Convert.ChangeType(s, type));
                    }
                }
                catch
                {

                }
                return null;
            }

            public static byte[] ToBytes(this Stream stream)
            {
                int capacity = stream.CanSeek ? (int)stream.Length : 0;
                using (MemoryStream output = new MemoryStream(capacity))
                {
                    int readLength;
                    byte[] buffer = new byte[4096];

                    do
                    {
                        readLength = stream.Read(buffer, 0, buffer.Length);
                        output.Write(buffer, 0, readLength);
                    }
                    while (readLength != 0);

                    return output.ToArray();
                }
            }

            public static string ToUTF8String(this byte[] bytes)
            {
                if (bytes == null)
                    return null;
                else if (bytes.Length == 0)
                    return string.Empty;
                var str = System.Text.Encoding.UTF8.GetString(bytes);

                // If the string begins with the byte order mark
                if (str[0] == '\xFEFF')
                    return str.Substring(1);
                else
                {
                    return str;
                }
            }
        }
    }
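
To make the conversion and fallback behaviour concrete, here are a few illustrative calls (my examples, not from the original post):

  // Successful conversions return the converted value
  int id = "42".As<int>();                   // 42

  // Failed conversions are swallowed and the default comes back
  int zero = "not a number".As<int>();       // 0, ie default(int)
  int ten = "not a number".As<int>(10);      // 10, the explicit default

  // Enums are parsed case-insensitively
  DayOfWeek day = "friday".As<DayOfWeek>();  // DayOfWeek.Friday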
    

Finally, if you actually do use this code, here are the tests:

  using Contrib.TestHelper;
  using Microsoft.VisualStudio.TestTools.UnitTesting;
  using NBehave.Spec.MSTest;

  namespace Contrib.Tests.TestHelper
  {
      /// <summary>
      /// Summary description for MotherObject Extensions Test
      /// </summary>
      [TestClass]
      public class MotherObjectExtensionsTest
      {

          [TestMethod]
          public void EmptyString()
          {
              var test = new Test { }.SetupWithDefaultValuesFrom(new Test { EmptyString = "this" });
              test.EmptyString.ShouldEqual("this");
          }
          [TestMethod]
          public void ModelStringIsUsed()
          {
              var test = new Test { EmptyString = "that" }.SetupWithDefaultValuesFrom(new Test { EmptyString = "this" });
              test.EmptyString.ShouldEqual("that");
          }
          [TestMethod]
          public void EmptyStringIsAccepted()
          {
              var test = new Test { EmptyString = "" }.SetupWithDefaultValuesFromAllowEmpty(new Test { EmptyString = "this" });
              test.EmptyString.ShouldEqual("");
          }

          [TestMethod]
          public void ZeroInt()
          {
              var test = new Test { }.SetupWithDefaultValuesFrom(new Test { ZeroInt = 1 });
              test.ZeroInt.ShouldEqual(1);
          }
          [TestMethod]
          public void ModelIntIsUsed()
          {
              var test = new Test { ZeroInt = 2 }.SetupWithDefaultValuesFrom(new Test { ZeroInt = 1 });
              test.ZeroInt.ShouldEqual(2);
          }
          [TestMethod]
          public void ZeroIntIsAccepted()
          {
              var test = new Test { ZeroInt = 0}.SetupWithDefaultValuesFromAllowEmpty(new Test { ZeroInt = 1 });
              test.ZeroInt.ShouldEqual(0);
          }

          [TestMethod]
          public void ZeroLong()
          {
              var test = new Test { }.SetupWithDefaultValuesFrom(new Test { ZeroLong = 1 });
              test.ZeroLong.ShouldEqual(1);
          }
          [TestMethod]
          public void ModelLongIsUsed()
          {
              var test = new Test { ZeroLong = 2 }.SetupWithDefaultValuesFrom(new Test { ZeroLong = 1 });
              test.ZeroLong.ShouldEqual(2);
          }
          [TestMethod]
          public void ZeroLongIsAccepted()
          {
              var test = new Test {ZeroLong = 0}.SetupWithDefaultValuesFromAllowEmpty(new Test { ZeroLong = 1 });
              test.ZeroLong.ShouldEqual(0);
          }

          private class Test
          {
              public string EmptyString { get; set; }
              public int ZeroInt { get; set; }
              public long ZeroLong { get; set; }
          }
      }
  }

I hope that helps.

Manual Regressions: an oxymoron?

July 12th, 2009

I was at a testing talk the other night. The conversation turned to automated testing, which got me asking the question: are manual regressions an oxymoron? I was thinking that, no, it isn’t, if regression is simply the idea of re-testing. I would have thought that re-testing the application for fitness-for-purpose is one type of regression, as is re-testing that a particular business rule is applied. But it does make us think about why we are regression testing in the way we are, either manually or automated.

Luckily the speaker replied, concurring with my thoughts:

No, I don’t think it’s an oxymoron. It may be cheaper to manually test some (hopefully rare) aspects repeatedly, where a very high cost of automation is unlikely to give sufficient payback over the lifetime of the project. An extreme version of that is automated usability testing, which we can’t do. But I agree that it’s a useful first-step generalisation, given that many people don’t see manual regression testing as an oxymoron under any circumstances.

How then would we capture these regression tests? Are they part of a test plan? How does the team know to do them? Are they part of the done criteria? How do we maintain these lists? How do we make sure that they are followed and that new members are inducted?

It seems to me that there should be scripted (not in the programmatic sense) tests. Where these are held is debatable. In my teams, I would want them as close to the source code as possible (ie in the source code). I want them in plain text if at all possible. I want them as short as possible. They should resemble smoke tests.
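
To make that concrete, here is the kind of thing I mean – a short, plain-text scripted test kept in the repository (the steps are invented for illustration):

  smoke-tests.txt
  1. Log in as a standard user – the dashboard loads without error
  2. Create a banner with an image – it appears in the banner list
  3. Deactivate the banner – it no longer displays on the home page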

In teams that tend to keep the testers separate from the developers, I see them kept in separate sources: Excel, Word, tracking systems (eg Test Warehouse) or bug trackers (eg JIRA) or systems that are supposed to link everyone (eg TFS 2010).

However, I find that these approaches usually try to hide a lack of commitment to continuous integration, continuous deployment and a well-layered test automation strategy, along with a lot of multitasking and handoffs.

Configuration DSL – Part III: Writing a Fluent Interface for Configuration in C#

July 12th, 2009

This is the third entry on creating configuration DSLs in C#. The previous entry looked at the implementation. The basis of this code is to solve the problem that an application may require multiple configurations: development, test and then moving through environments up to production. This solution makes the distinction that there are profiles of settings which are distinct from the values in each setting. Before I address the problems in this design, let’s try to see what patterns are used in the DSL and relate them to Fowler’s DSL patterns.

Patterns used …

Problems: Making the configuration immutable behind a factory

Configuration DSL – Part II: Writing a Fluent Interface for Configuration in C#

June 25th, 2009

This is the second entry on creating configuration DSLs in C#. The previous entry looked at the design of the interface. This one looks at the implementation. Just a small note that the configurator does not use an IoC container to solve the problem of dependency injection.

Separating out settings from values: the configurator and the configuration

This design has two main parts to the implementation, the configurator and the configuration:

  1. Configurator: this wraps the different types of settings and combinations available
  2. Configuration: these are the actual values for a setting

It is important that there is this distinction. The configurator allows you to set defaults or set values based on other values. In many ways this allows you to code implicit understandings – or even profiles – of the system.

For example, in the previous entry I talked about three types of profiles: production, acceptance/integration testing and unit testing. The production profile required logging, proxies and credentials to external systems. The test profiles required stubs and mocks. Now for some code.

Structure of the project

Here’s a quick overview of the code for the basic test application with configuration. If you want a copy of the source there’s one at http://github.com/toddb/DSL-Configuration-Sample. The main point is that there is a Configuration folder within the Core project. In this case, there is an application that accepts the configuration.

  \Core
    Application.cs
    ApplicationRunner.cs
    IApplication.cs
    IAssetService.cs
    IConnector.cs
    IHttpService.cs
    LogService.cs
    \Configuration
      AppConfiguration.cs
      AppConfigurator.cs
      Credentials.cs
      IAppConfiguration.cs
      IAppConfigurator.cs
      KnownDotNetConfig.cs
      Proxy.cs
  \Models
    Asset.cs
  \Tests
    \Acceptance
      ApplicationRunnerTest.cs
      IndividualUpdateVisualisationTests.cs
    \Configuration
      ConfigurationTest.cs
    \Core
      ApplicationRunnerTest.cs
      ApplicationTest.cs
    App.config

The configuration

The configuration class is as you would expect. It requires a number of values. In this case, it just has public getters/setters on each of them. At this stage, the idea is that any profile of settings requires all of these in some form or another.

using System.Net;
using Core;
using log4net;

namespace Core.Configuration
{
    public class AppConfiguration : IAppConfiguration
    {
        public IAssetService AssetService { get; set;}
        public IConnector IncomingConnector { get; set; }
        public IConnector OutgoingConnector { get; set; }
        public NetworkCredential OutgoingCredentials { get; set; }
        public WebProxy Proxy { get; set;}
        public string BaseUrl { get; set; }
        public ILog Logger { get; set; }
    }
}

The configurator

The configurator is more complex so let’s look at the interface first. It is the list of items that we saw in the first entry. It is important that this interface is as expressive as possible for helping set up a profile of settings. In this case, you might note that I only have one method each for the incoming and outgoing connectors rather than defaults. That is because I would expect people to ignore setting these up explicitly unless they require them. This is implicit knowledge that may be better handled in other ways. There are some others I’ll leave for now.

  using System;
  using Core;
  using log4net;

  namespace Core.Configuration
  {
      public interface IAppConfigurator : IDisposable
      {
          void IncomingConnector(IConnector connector);
          void OutgoingConnector(IConnector connector);
          void BaseUrl(string url);
          void BaseUrlFromDotNetConfig();
          void UseCredentials(string username, string password);
          void UseCredentialsFromDotNetConfig();
          void RunWithNoProxy();
          void RunWithProxyFromDotNetConfig();
          void RunWithProxyAs(string username, string password, string domain, string url);
          void UseLog4Net();
          void UseLoggerCustom(ILog logService);
      }
  }

Now that you have seen the interface and hopefully have it in your head, we’ll look at the implementation because this is the guts of the DSL. Don’t be surprised when you see that there really isn’t anything to it. All of the members implementing the interface merely update the configuration. Have a read and I’ll explain the main simple twist after the code.

  using System;
  using System.Configuration;
  using System.Net;
  using log4net;

  namespace Core.Configuration
  {
      public class AppConfigurator : IAppConfigurator
      {
          private NetworkCredential _credentials;
          private WebProxy _proxy;
          private string _url;
          private ILog _logger;
          private IConnector _incoming;
          private IConnector _outgoing;

          AppConfiguration Create()
          {
              var cfg = new AppConfiguration
                            {
                                IncomingConnector = _incoming,
                                OutgoingConnector = _outgoing,
                                Proxy = _proxy,
                                OutgoingCredentials = _credentials,
                                BaseUrl = _url,
                                Logger = _logger
                            };
              return cfg;
          }

          public static AppConfiguration New(Action<IAppConfigurator> action)
          {
              using (var configurator = new AppConfigurator())
              {
                  action(configurator);
                  return configurator.Create();
              }
          }

          public void IncomingConnector(IConnector connector)
          {
              _incoming = connector;
          }

          public void OutgoingConnector(IConnector connector)
          {
              _outgoing = connector;
          }

          public void BaseUrl(string url)
          {
              _url = url;
          }

          public void BaseUrlFromDotNetConfig()
          {
              _url = ConfigurationManager.AppSettings[KnownDotNetConfig.BaseUrl];
          }

          public void UseCredentials(string username, string password)
          {
              _credentials = Credentials.Custom(username, password);
          }

          public void UseCredentialsFromDotNetConfig()
          {
              _credentials = Credentials.DotNetConfig;
          }

          public void RunWithNoProxy()
          {
              _proxy = null;
          }

          public void RunWithProxyFromDotNetConfig()
          {
              _proxy = Proxy.DotNetConfig;
          }

          public void RunWithProxyAs(string username, string password, string domain, string url)
          {
              _proxy = Proxy.Custom(username, password, domain, url);
          }

          public void UseLog4Net()
          {
              _logger = LogService.log;
          }

          public void UseLoggerCustom(ILog logService)
          {
              _logger = logService;
          }

          public void Dispose()
          {

          }
      }
  }

All of that is pretty simple. The basis of the interface is to allow the profile either to get settings from the App.config with the methods suffixed with FromDotNetConfig or to pass in its own (eg (username, password) or logService). The code in those classes is also straightforward.

Classes loading from the DotNet Config

This is a very straightforward class which just gets the values through the ConfigurationManager or passes through the username and password. This implementation isn’t cached, so you may want to avoid calling ConfigurationManager every time. I’ll leave that implementation to you.

  using System.Configuration;
  using System.Net;

  namespace Core.Configuration
  {
      public class Credentials
      {
          public static NetworkCredential DotNetConfig
          {
              get
              {
                  return new NetworkCredential(ConfigurationManager.AppSettings["Asset.Username"],
                                               ConfigurationManager.AppSettings["Asset.Password"]);
              }
          }

          public static NetworkCredential Custom(string username, string password)
          {
              return new NetworkCredential(username, password);
          }

      }
  }

The same approach is used in the Proxy class. In this example, it only returns a WebProxy if a url was given in the config. Again, here’s another example of implicit profile settings.

  using System;
  using System.Configuration;
  using System.Net;

  namespace Core.Configuration
  {
      public class Proxy
      {

          public static WebProxy DotNetConfig
          {
              get
              {
                  WebProxy proxy = null;
                  if (!String.IsNullOrEmpty(ConfigurationManager.AppSettings["Proxy.Url"]))
                  {
                      proxy = new WebProxy(ConfigurationManager.AppSettings["Proxy.Url"], true)
                                  {
                                      Credentials = new NetworkCredential(
                                          ConfigurationManager.AppSettings["Proxy.Username"],
                                          ConfigurationManager.AppSettings["Proxy.Password"],
                                          ConfigurationManager.AppSettings["Proxy.Domain"])
                                  };
                  }
                  return proxy;
              }
          }
          public static WebProxy Custom(string username, string password, string domain, string url)
          {
              // The url is the proxy address; the credentials authenticate against it
              return new WebProxy(url, true)
                         {
                             Credentials = new NetworkCredential(username, password, domain)
                         };
          }

      }
  }

By now you should have seen all the basic getters/setters for values. How then are profiles of settings done?

Consuming the configurator

The configurator initiation code is also pretty straightforward and this implementation uses C# actions. Let’s step through the code outlined above in its two methods, Create and New.

When you call the configurator, if you remember, it was through AppConfigurator.New(configuration). This returns a configuration. This is a simple pattern of calling a static method that returns a class instance. Here, New creates a new instance of the configurator which, through the method Create, returns a configuration. So where is the real work done?

  AppConfiguration Create()
   {
       var cfg = new AppConfiguration
                     {
                         IncomingConnector = _incoming,
                         OutgoingConnector = _outgoing,
                         Proxy = _proxy,
                         OutgoingCredentials = _credentials,
                         BaseUrl = _url,
                         Logger = _logger
                     };
       return cfg;
   }

   public static AppConfiguration New(Action<IAppConfigurator> action)
   {
       using (var configurator = new AppConfigurator())
       {
           action(configurator);
           return configurator.Create();
       }
   }



The real work is done in the one line action(configurator). This line calls the action/lambda that you passed in to set values on the configurator, and then when Create is called these are merged and returned.

So, back to the original code (see below): the action(configurator) will take each cfg.* line and run the method. In this case, the first will run BaseUrlFromDotNetConfig. If that doesn’t make sense, think of it this way: at the point of action in the code, it executes each of the cfg.* lines in the context of that class.

  AppConfigurator.New(cfg =>
                          {
                              cfg.BaseUrlFromDotNetConfig();
                              cfg.RunWithProxyFromDotNetConfig();
                              cfg.UseCredentialsFromDotNetConfig();
                              cfg.UseLog4Net();
                          });

So that’s how to get the configuration available for application start up. You can get a lot more complex but that should get you started. Personally, if you like this approach then head off in the direction of libraries doing this: NBehave, the NHibernate fluent interface, StoryQ, codecampserver.
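
For instance, a profile for acceptance testing might mix App.config values with explicit overrides (the values here are hypothetical, but every method comes from the IAppConfigurator shown above):

  var acceptanceConfiguration = AppConfigurator.New(cfg =>
                          {
                              cfg.BaseUrl("http://test-server/external-service"); // explicit test endpoint
                              cfg.RunWithProxyFromDotNetConfig();
                              cfg.UseCredentials("test-user", "test-password");   // custom credentials
                              cfg.UseLog4Net();
                          });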

Good luck.

The next part tries to summarise this design, how it relates to Fowler’s patterns in DSLs and some problems with this DSL and solutions to them. The source is at http://github.com/toddb/DSL-Configuration-Sample/tree/master.

DSL – Part I: Writing a Fluent Interface for Configuration in C#

June 24th, 2009

I spent the weekend writing a configuration domain specific language (DSL) for a client interface. This interface was lifted from the Topshelf project on CodePlex after listening to a podcast on MassTransit. It was all pretty straightforward and as expected, as you would imagine. I want to document for myself the process of setting one up for the future and in the process map the code to the patterns that Fowler is outlining in his new DSL book. If you are wondering, Fowler would call this an internal DSL or fluent interface. By the end of this, I should have got through where I did some method chaining, used a strategy pattern and how I think it has helped me in DDD. What I find, though, is that the configurator code is not simply about writing cleaner, prettier code.

Most importantly, I want to think through how this approach is good for TDD. What I have noticed the most is that by using a fluent interface for the configuration I found that it (a) showed up code smells because I tried to consume code more often and in different ways and (b) helped me refactor other parts of the code because they no longer had dependencies on configuration code.

The problem domain …

My application is a minor application living in a Windows service. Every five minutes, it polls a webservice to get a list of items that it must return information about. That external system has a key for the item and relates that key to the key for the same item in the internal system. This is a trivial system that then gets information from the internal system, collates it and then sends it back to the external system. The external system caches this information and combines it with its own information. Quite simple – though not if you looked at the early versions of the code.

For this explanation, the system needs to be configured with incoming and outgoing feeds which in turn need network credentials and proxy settings. Both feeds have the same domain, credentials and proxy settings. Early versions worked well until you needed to test them – or extend them. What I would also like to do is have tests around logging, because this system must prove that when it isn’t receiving data someone is getting notified (via the logging framework).

What I also require are acceptance tests, integration tests and unit tests. Running each of these can be hard if you can’t turn settings on and off. Unfortunately, Microsoft’s app.config isn’t really that good at handling all these situations. To be fair, it is only XML. But I don’t see easy solutions to this and as such I find others and myself manually changing the XML at times to test edge cases. That might be fine at coding time but it leaves little opportunity for automated regression testing. My success criterion was to allow for default values (eg the corporate proxy is http://proxy.mycompany.com), default settings (proxy=on) and custom versions of both (eg no proxy or a mock proxy) within the same test run.

The simplicity of the solution: using blocks for configuration

Let’s start with my first consumer code. This is the code that creates the configuration for the application. It is pretty simple, although looking at it here, it also looks a little wordy. Hopefully you read it as I do: with the new configuration, I’ll have the base url with values from the dotnet config, I’ll run the app with a proxy with the values in the dotnet config, I’ll use the network credential values from the dotnet config and finally I’ll use log4net. In short, that’s the configuration for a corporate network going out through a proxy to an external address which is secured behind credentials.

  var configuration = AppConfigurator.New(cfg =>
                          {
                              cfg.BaseUrlFromDotNetConfig();
                              cfg.RunWithProxyFromDotNetConfig();
                              cfg.UseCredentialsFromDotNetConfig();
                              cfg.UseLog4Net();
                          });

Just a quick note for the intellisense-ers out there. Yes, when you hit control-space after cfg dot, you see a list of the configuration settings. Here’s a preview of the settings:

  BaseUrl(url)
  BaseUrlFromDotNetConfig
  RunWithProxyAs(username, password, domain, url)
  RunWithProxyFromDotNetConfig
  UseCredentialsFromDotNetConfig
  UseCredentials(username, password)
  UseLog4Net
  UseLoggerCustom(logService)
  OutgoingConnector(connector)
  IncomingConnector(connector)

Interestingly, this particular configuration is rarely run – it is the production configuration (and will require production values). All I tend to need from this configuration for development is a configurable BaseUrl. I’m generally doing work on the local machine, so I’m not going to need a proxy or credentials for the external service. In this case, I won’t tend to use logging yet either. But what I will need is the ability to mock out the external service. Before I go there, here is how the configuration is hooked into the application. I pass the configuration into the application. [What I will explain later is that at this point I hand in the configuration values, having worked out what they are based on the configuration type.]

  TTLApplication.Run(configuration);
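
For development, that configuration might then be as minimal as this sketch (the URL is hypothetical; BaseUrl is from the settings listed above):

  var configuration = AppConfigurator.New(cfg =>
                          {
                              cfg.BaseUrl("http://localhost/external-service"); // local, so no proxy or credentials
                          });

  TTLApplication.Run(configuration);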

Now to the test. These are far more important for development. In the example below, I want to check that, in running my application, the outgoing connector calls the Send method (ie that it sends out). In this case, I use Moq as the framework to mock out the service. This service is actually just an HTTP service that makes web requests. The code sets up the connector to respond to the Send method, runs the application and then verifies that Send was called.

  [TestMethod]
  public void CanMockOutgoingConnector()
  {
      var connector = new Mock<IOutgoingConnector>();
      connector.Setup(x => x.Send());

      Application.Run(AppConfigurator.New(cfg =>
                        { 
                          cfg.OutgoingConnector(connector.Object);
                        }));
      connector.Verify(x => x.Send(), Times.Exactly(1));
  }

You’ll note that in both cases you only set up what you need. In the first example, the outgoing connector is set up by default, whereas in the second only the outgoing connector is set up, with a mock.

These configurator settings are in fact more than what you need. The configurator is really good for creating settings – defaults and combinations – but for unit testing it is easier to go straight to the setting values. Here’s a simple example of the above. While there’s not a lot of difference, the important thing is that you are working through fewer layers, because when it gets more complicated you’ll need a greater understanding of them.

  [TestMethod]
  public void CanMockOutgoingConnector()
  {
      var connector = new Mock<IOutgoingConnector>();
      connector.Setup(x => x.Send());

      Application.Run(new AppConfiguration
                          {
                              IncomingConnector = null,
                              OutgoingConnector = connector.Object,
                              BaseUrl = "http://example.com"
                          });
      connector.Verify(x => x.Send(), Times.Exactly(1));
  } 

At this point, you should be starting to get the idea that this is an easier way to configure the system for multiple purposes. The next entry will look through the code that implements it.


jQuery and testing – JSUnit, QUnit, JsSpec [Part 1]

November 15th, 2008

Trying JSUnit with jQuery

I have started first with JSUnit because it is tried and true (and to tell the truth I thought it would be fine and didn’t bother with a quick search for alternatives).

For the impatient, I won’t be going with JSUnit and here are some reasons:

  • the problem is that JSUnit’s setup (ie onload – pausing) to load the data doesn’t integrate well with jQuery. JSUnit has its own setup and document loader but I am still wondering how to do this transparently (ie I didn’t actually get the test to work – I wasn’t exhaustive, but then again I don’t think I should have needed to be to get this test going)
  • Firefox 3.0 on the Mac doesn’t integrate well (ie it doesn’t work), but Safari does! Unfortunately, I am a little bound to Firebug for development.
  • JSUnit doesn’t report tests well either

I went and had a look at how JSUnit does it. (Remember that this framework has been around a lot longer than jQuery.) Here is the extract from the code/test samples. The basic setup is to hook into an existing testManager that exists within a frame and then get the data from there. Furthermore, you need to manage your own flag to indicate that the process has completed. JSUnit then looks through all functions that start with test; in this case testDocumentGetElementsByTagName checks the expected data. Here I assume that the tests are run in a particular frame (buffer()) that testManager gives us access to.

var uri = 'tests/data/data.html';

function setUpPage() {
    setUpPageStatus = 'running';
    top.testManager.documentLoader.callback = setUpPageComplete;
    top.testManager.documentLoader.load(uri);
}

function setUpPageComplete() {
    if (setUpPageStatus == 'running')
        setUpPageStatus = 'complete';
}

function testDocumentGetElementsByTagName() {
    assertEquals(setUpPageStatus, 'complete');
    var buffer = top.testManager.documentLoader.buffer();
    var elms = buffer.document.getElementsByTagName('P');
    assert('getElementsByTagName("P") returned is null', elms != null);
    assert('getElementsByTagName("P") is empty', elms.length > 0);
}

Below is the rewritten code to exercise my code. Here’s a couple of the features:

  • for setup, pass in the correct xml file via uri variable (obviously)
  • to test, I have written a test testXML2Object.

There is one major design problem with the code itself that didn’t allow me to use my own data loader. You will see the line var feed = new StoryQResults(buffer);. Where did that come from? It is nothing close to the code I said that I wanted to exercise. It is in fact from within the code I want to exercise. The major issue I found here is that to load and test data I had to use the testManager rather than my own $().storyq() call.

The other problem was that it wouldn’t return the result that I wanted either. I was expecting my feed variable to be an object of the results. Instead I was getting a reference to the function StoryQResults – given that it wasn’t running in Firefox and I didn’t have Firebug, life was getting a little hard.

var uri = '../../xml/results-01.xml';

function setUpPage() {
    setUpPageStatus = 'running';
    top.testManager.documentLoader.callback = setUpPageComplete;
    top.testManager.documentLoader.load(uri);
}

function setUpPageComplete() {
    if (setUpPageStatus == 'running')
        setUpPageStatus = 'complete';
}

function testXML2Object() {
    assertEquals(setUpPageStatus, 'complete');
    var buffer = top.testManager.documentLoader.buffer();
    
    var feed = new StoryQResults(buffer);               

    assertEquals(feed.version, '0.1')
    assertEquals("Number of stories", $(feed).size(), 1)
    $.each(feed, function(){
      alert(this)               
    })

}

Even though I know that I am getting a function returned instead of an object, I am still going to see if I can invoke my own loading function within JSUnit. Here’s what the code would look like below. I wouldn’t recommend running it – just take a look at it. The code to me is a mixture of styles that starts to bloat. On the one hand, JSUnit has this setup phase with explicit flags and no anonymous functions. On the other hand, because I am using jQuery conventions, I encapsulate a lot of that logic. For example, jQuery(function(){}) waits for the page to be loaded before executing $("#tree").storyq(). Then I have the callback function inline. It looks good from the outside, but it doesn’t work.

The order of calls is: loading, in test and then loaded, indicating that my jQuery function runs after the test has been run. The order should have been loading, loaded and then in test. setUpPage runs within its own setup/test/teardown cycle, but my jQuery call isn’t linked into it. jQuery is waiting on a document flag rather than the custom flag (within testManager). At this point, I do not wish to dig into these libraries to work out how to get it all to play nicely. It wasn’t designed to work this way. Let’s find one that was.

var data = '';

function setUpPage() {
    setUpPageStatus = 'running';
    alert('loading')
    jQuery(function() {
      $("#tree").storyq({
          url: '../../xml/results-01.xml', 
          success: null, 
          load: function(feed) {
            data = feed;
            alert('loaded')
            setUpPageComplete()
            }
          });
    });
}

function setUpPageComplete() {
    if (setUpPageStatus == 'running')
        setUpPageStatus = 'complete';
}

function testXML2Object() {
    alert('in test')
    assertEquals(setUpPageStatus, 'complete');

    assertEquals(data.version, '0.1')
    assertEquals("Number of stories", $(feed).size(), 1)
}

I’m invoking the two-feet principle: I’m moving on to the next framework (after a quick search): QUnit.

jQuery and testing – JSUnit, QUnit, JsSpec [Part 3]

November 14th, 2008

Trying JsSpec with jQuery

This is part three of three. The previous two have been focused on managing the problem of timing: JSUnit got too hard, and QUnit is easy but you still have to manage the timings yourself. With JsSpec there is no problem because it is all managed for you. Nice work! Here’s the code I had to write.

A couple of notes on writing it. I had to dig into the code to find out the setup/teardown lifecycle keywords. There turn out to be setup/teardown per test (eg before) and per test suite (eg before all). I also had to dig around to find the comparators (eg should_be, should_not_be_null). I couldn’t find any documentation.

describe('I need to read the xml and convert into object', {
  'before all': function() {
    target = {};
    $().storyq({
        url: 'data/results-01.xml', 
        load: '',
        success: function(feed) {
          target = feed
      }
    })
   
  },
  
  'should return an object': function() {
    value_of(target).should_not_be_null()
  },
  
  'should not be a function': function() {
    value_of(typeof target).should_not_be(typeof Function )
  },
  
  'should have a version': function(){
    value_of(target.version).should_be('0.1')
  },
  
  'should have items': function(){
    value_of(target.items).should_not_be_empty()
  },
  
  'should have the same value as the reference object in data/results-01.js': function(){
    value_of(reference).should_not_be_undefined()
    value_of(target).should_be(reference)
  },
  
})

The output looks nice too ;-) Here’s the overall code. Notice that I have also used the technique of a reference object in data/results-01.js:

<html>
<head>
<title>JSSpec results</title>
<link rel="stylesheet" type="text/css" href="../lib/jsspec/JSSpec.css" />
<script type="text/javascript" src="../lib/jsspec/JSSpec.js"></script>

<script src="../lib/jquery/jquery.js"></script>
<script src="../lib/jquery/jquery.cookie.js" type="text/javascript"></script>
<script src="../lib/treeview/jquery.treeview.js" type="text/javascript"></script>
<script src="../build/dist/jquery.storyq.js" type="text/javascript"></script>

<script type="text/javascript" src="data/results-01.js"></script>
<script type="text/javascript" src="specs/treeview.js"></script>

<script type="text/javascript">

  describe('I need to read the xml and convert into object', {
    'before all': function() {
      target = {};
      $().storyq({
          url: 'data/results-01.xml', 
          load: '',
          success: function(feed) {
            target = feed
        }
      })

    },

    'should return an object': function() {
      value_of(target).should_not_be_null()
    },

    'should not be a function': function() {
      value_of(typeof target).should_not_be(typeof Function )
    },

    'should have a version': function(){
      value_of(target.version).should_be('0.1')
    },

    'should have items': function(){
      value_of(target.items).should_not_be_empty()
    },

    'should have the same value as the reference object in data/results-01.js': function(){
      value_of(reference).should_not_be_undefined()
      value_of(target).should_be(reference)
    },

  })
</script>

</head>
    <body>
      <div style="display:none;"><p>A</p><p>B</p></div>
    </body>
</html>

JSSpec isn’t written using jQuery, so there are a couple of issues that I can’t pin down. When I get errors it stops the tests completely. I suspect that this is because these tests use callbacks and they don’t return a (jQuery) object. jQuery does a lot of object chaining and JsSpec isn’t cut out for it (I think).

Well, that’s it.

jQuery and testing – JSUnit, QUnit, JsSpec [Introduction]

November 12th, 2008

I had been writing a jQuery parser and then realised once I had spiked it that I hadn’t actually written any tests. So, these are some results from a spike in unit testing a jQuery plugin.

Some background: the plugin is a results viewer for an XML feed from StoryQ. So, I have run some tests and have results. Now I want to see them in HTML format. The plugin merely transforms the XML to be displayed using the treeview plugin. I wanted to avoid handing in a JSON feed formatted specifically for the treeview. I wanted all this to happen client side.

The tests have two aspects:

  • xml loading and parsing into an object
  • rendering the object into a tree (at that point treeview takes over)

In short, I want to test the code underlying this call, which returns the feed before populating an <ul id="tree"> element:

  $('#tree').storyq({
      url: 'tests/data/results-01.xml',   
      success: function(feed) {
        $("#tree").treeview(); //populate the tree
      }    
  });

Problem for any framework: sequencing

Let’s take a look at what I want as test code. In this code, I want to populate data with the feed variable returned in the success callback. The test can then check for values. Take a look at the code below. When I run it, I should (ideally) see the sequence of alerts: loaded, start test, end test. Of course, I don’t. I see start test, end test, loaded as the sequence. It should be obvious that the success callback hasn’t been called yet when the test runs: the JavaScript runs sequentially and the test doesn’t wait for the asynchronous load. Okay, nothing here is surprising. I laboured this point because any of the frameworks must deal with this problem.

var data = {};

jQuery(function() {
  $().storyq({
      url: '../../xml/results-01.xml', 
      success: function(feed) {
        data = feed;
        alert('loaded')       
        }
      });
});

function testXML2Object() {
  alert('start test')
  assertEquals(data, {"result"}, "Assume that result is a real/correct object");
  alert('end test')     
}

Frameworks looked at

I looked at three frameworks for XML loading and parsing: JSUnit, QUnit and JsSpec.

Conclusions

  • In short, QUnit and JsSpec are both as easy as each other. JSUnit seems now to be too heavy given what we now have.
  • QUnit is a TDD framework and is used by jQuery itself. I suspect it might survive longer. There are no setup/teardown phases.
  • JsSpec is a BDD framework and doesn’t use jQuery at all but can easily be used with jQuery plugins. There are good setup/teardown phases for tests and suites.
  • Your choice between the two is likely to be your preference between TDD and BDD. It probably depends upon which boundaries you want to test and how you want to go about specifying.

What I didn’t do:

  • integration with a CI system
  • cross-browser testing
  • test with selenium or watir