storyq-vs-slim-fitnesse

August 24th, 2010

Acceptance testing from user stories via StoryQ and Slim

Recently I was working with a team using user stories for acceptance testing (in .Net). As it turned out, I had the opportunity to implement the acceptance tests in both Slim/Fitnesse and StoryQ. To me, either framework is fine because they share the same strategy: they create an abstraction that is understood by the customer and that we can run against the system under test. Below is an extract of the user story that was discussed with the customer (there was also a sketch of the GUI that I haven’t included), and what I’ll show is the StoryQ and Slim implementations up to the point of hooking into the system under test (SUT). The process for each is a pretty similar two-step process: (1) code up the test as the customer representation, (2) create (C#) code that wraps hooking into the SUT.

    
                  step one           step two
 ------------------------------------------------
  user story  ->  xUnit StoryQ    -> TestUniverse
  user story  ->  story test wiki -> Fixtures

User story

This story was written in plain language and developed over three cycles over the course of an hour. There were actually about 7 scenarios in the end.

As a Account Manager
I want to cater for risks terminated in System1 after extraction to System2
So that I offer a renewal to the Broker

    Given I have a policy                                                                                                                                        
      And it has a current risk in System2                                                                                                                           
    When the risk is terminated in System1                                                                                                                        
    Then the risk is deleted in System2                    

    Given I have a policy                                                                                                                                      
    When a deleted risk later than the load date is added                                                                                                    
    Then it should be rerated      
                                                                                                                      
    Given I have a policy                                                                                                                                        
      And it has a current risk in System2
    When its status is ready to review                                                                                                                           
      And a risk is deleted in System1                                                                                                                            
    Then the policy should be rated                                                                                                                              
      And the policy should remain in the same status        

Example One: StoryQ

I took this user story and converted it into StoryQ using the StoryQ converter that comes with the library. StoryQ has a couple of slight differences: (a) it uses the In order to syntax and (b) I had to have a Scenario with each Given/When/Then. In total it took me about half an hour to download the library, add it as a reference and convert the story.

Story to StoryQ via converter

When doing the conversion, the converter will error until the story is in the correct format. Here’s what I had to rewrite it to:

Story is Terminated risks after extractions
  In order to offer renewal to the Broker
  As a Account Manager
  I want to cater for risks terminated in System1 after extraction to System2

 With scenario risks become deleted in System2
   Given I have a policy                                                                                                                                        
     And it has a current risk in System2                                                                                                                           
  When the risk is terminated in System1                                                                                                                        
    Then the risk is deleted in System2                    

 With scenario newly deleted risks need to be rerated
   Given I have a policy                                                                                                                                      
   When a deleted risk later than the load date is added                                                                                                    
   Then it should be rerated      

 With scenario status being reviewed
   Given I have a policy                                                                                                                                        
     And it has a current risk in System2
   When its status is ready to review                                                                                                                           
     And a risk is deleted in System1                                                                                                                            
   Then the policy should be rated                                                                                                                              
     And the policy should remain in the same status            

The output of the converter in C# is:

new Story("Terminated risks after extractions")
       .InOrderTo("offer renewal to the Broker")
       .AsA("Account Manager")
       .IWant("to cater for risks terminated in System1 after extraction to System2")

       .WithScenario("risks become deleted in System2")
       .Given("I have a policy")
       .And("it has a current risk in System2")
       .When("the risk is terminated in System1")
       .Then("the risk is deleted in System2")

       .WithScenario("newly deleted risks need to be rerated")
       .Given("I have a policy")
       .When("a deleted risk later than the load date is added")
       .Then("it should be rerated")

       .WithScenario("status being reviewed")
       .Given("I have a policy")
       .And("it has a current risk in System2")
       .When("its status is ready to review")
       .And("a risk is deleted in System1")
       .Then("the policy should be rated")
       .And("the policy should remain in the same status")

StoryQ into xUnit

From this output you add it into an xUnit test framework – so far that is either MSTest or NUnit. We were using MSTest and it looks like this. We need to add:

  • [TestClass] and [TestMethod] attributes as per MSTest
  • I add indentation myself
  • .ExecuteWithReport(MethodBase.GetCurrentMethod()) at the end
using System.Reflection;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using StoryQ;

namespace Tests.System.Acceptance
{
    [TestClass]
    public class TerminatedRiskAfterExtractionTest
    {
        [TestMethod]
        public void Extraction()
        {
            new Story("Terminated risks after extractions")
                .InOrderTo("offer renewal to the Broker")
                .AsA("Account Manager")
                .IWant("to cater for risks terminated in System1 after extraction to System2")

                .WithScenario("risks become deleted in System2")
                    .Given("I have a policy")
                        .And("it has a current risk in System2")
                    .When("the risk is terminated in System1")
                    .Then("the risk is deleted in System2")

                .WithScenario("newly deleted risks need to be rerated")
                    .Given("I have a policy")
                    .When("a deleted risk later than the load date is added")
                    .Then("it should be rerated")

               .WithScenario("status being reviewed")
                   .Given("I have a policy")
                       .And("it has a current risk in System2")
                   .When("its status is ready to review")
                       .And("a risk is deleted in System1")
                   .Then("the policy should be rated")
                       .And("the policy should remain in the same status")

                .ExecuteWithReport(MethodBase.GetCurrentMethod());
        }
    }
}

Link to the SUT (TestUniverse)

StoryQ needs to link the user story test text to the SUT. One clean approach is to use a TestUniverse – and as you will see it replicates the idea of the Fitnesse fixture. A quick summary as you look at the code:

  • add a class TestUniverse, keep a reference to it available (_t) and instantiate it in the Setup()
  • convert string text to methods eg ("I have a policy" to _t.IHaveAPolicy)
  • add methods to the TestUniverse (personally I use resharper and it does the work for me)
  • quite quickly you get reuse patterns eg (_t.IHaveAPolicy)
  • I tend to create methods only as I need them because it gives me a better sense of tests actually implemented
    using System.Reflection;
    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using StoryQ;

    namespace Tests.System.Acceptance
    {
        [TestClass]
        public class TerminatedRiskInSystem1AfterExtractionTest
        {
            private TestUniverse _t;

            [TestInitialize]
            public void Setup()
            {
                _t = new TestUniverse();
            }

            [TestMethod]
            public void Extraction()
            {
                new Story("Terminated risks after extractions")
                    .InOrderTo("offer renewal to the Broker")
                    .AsA("Account Manager")
                    .IWant("to cater for risks terminated in System1 after extraction to System2")

                    .WithScenario("risks become deleted in System2")
                    .Given(_t.IHaveAPolicy)
                        .And("it has a current risk in System2")
                    .When("the risk is terminated in System1")
                    .Then("the risk is deleted in System2")

                    .WithScenario("newly deleted risks need to be rerated")
                    .Given(_t.IHaveAPolicy)
                    .When("a deleted risk later than the load date is added")
                    .Then("the policy should be rerated")

                    .WithScenario("status being reviewed")
                    .Given(_t.IHaveAPolicy)
                        .And("it has a current risk in System2")
                    .When(_t.ItsStatusIsReadyToReview)
                        .And("a risk is deleted in System1")
                    .Then("the policy should be rated")
                        .And("the policy should remain in the same status")

                    .ExecuteWithReport(MethodBase.GetCurrentMethod());
            }

        }

        public class TestUniverse
        {
            // private variables here if needed

            public void IHaveAPolicy()
            {
                // Hook up to SUT
            }

            public void ItsStatusIsReadyToReview()
            {
                // Hook up to SUT
            }

            public void ThePolicyShouldBeReRated()
            {
                // Hook up to SUT
                // Assertions too
            }

        }
    }   

Hopefully, you can see the TestUniverse helps us keep the two parts of the test separated – and of course, you could have them in separate files if you really wanted.

Now when you run this test through your favourite xUnit runner (MSTest, Resharper, TestDriven.Net, Gallio) you will get the results of the test. Or more appropriately, I have the tests ready and waiting: we implement the system first and return to these afterwards to hook them up. Yes, I confess, these system tests are test-last! I explain that somewhere else.

Example Two: Slim

Having got the code in StoryQ to this point, we decided to investigate what the same user story would look like in Slim. This exercise turned out to take somewhat longer. In C#, there are no specific libraries for implementing the Given/When/Then although there is one in java (GivWenZen). So, given our stories, we implemented them using a Script table. This turned out to be fruitful.

Slim story test wiki

Here are the wiki tables. In the actual wiki we also included the original user story around the tables; for this example I have included only the scenarios.

With scenario risks become deleted in System2

|script|Risk Terminated                  |
|check |I have a policy                 ||
|check |it has a current risk in System2||
|terminate risk                          |
|ensure|risk is deleted                  |

With scenario newly deleted risks need to be rerated

|script|Risk Terminated                                |
|check |I have a policy                               ||
|check |deleted risk later than the load date is added||
|ensure|risk is rerated                                |

With scenario status being reviewed       

|script|Risk Terminated                                        |
|check |I have a policy                        |               |
|check |it has a current risk in System2       |               |
|check |status is in                           |ready to review|
|check |risk is deleted                        |               |
|ensure|policy is rerated                                      |
|check |policy should remain in the same status|ready to review|

It took me a while to get my working knowledge of check, ensure and reject up to speed, including the mechanism for reporting back values in empty cells (eg |check|I have a policy||). Other than that it took us about an hour to get on top of the structure for the table in this case. The Script table has worked okay, but it did mean that we went from a user story to a story test that in the future we should go straight to. Having said that, the user story Given/When/Then is nice for its natural language syntax and helps avoid sticking solely to GUI-type actions.

Slim fixture

The Slim fixture is pretty straightforward. We hardcoded our policy and risk to start with; later on we would swap this out for the real system.

namespace Tests.System.Acceptance
{
    public class RiskTerminated
    {

        public string IHaveAPolicy()
        {
            return "12-4567846-POF";
        }

        public string ItHasACurrentRiskInSystem2()
        {
            return "001";
        }

        public bool terminateRisk()
        {
            return true;
        }

        public bool riskIsDeleted()
        {
            return true;
        }

        public string deletedRiskLaterThanTheLoadDateIsAdded()
        {
            return string.Empty;
        }

        public bool RiskIsRerated()
        {
            return true;
        }

        public bool PolicyIsRerated()
        {
            return true;
        }

        public string statusisin()
        {
            return string.Empty;
        }

        public string policyShouldRemainInTheSameStatus()
        {
            return string.Empty;
        }

        public bool getAListOfCancelledRisksFromSystem1()
        {
            return true;
        }

        public bool theRiskIsNotAvailableInSystem2()
        {
            return true;
        }
    }
}

Now that this is all working we can run the tests in Fitnesse. We already had Fitnesse up and running with the Slim build at the correct version. Don’t underestimate the time and expertise needed to get this all up and running – it’s not hard, after the first time that is! We could have also got our wiki running under a Generic Test in Visual Studio but we haven’t gone that far yet.

Some general comments

If you were attentive to the style of the tests, you might have noticed that we passed in no values for the tests. In this testing, the setup is limited to fetching existing data and then making changes from there. Because we are interested in knowing which pieces of data were touched, the tests do want, in this case, the policies reported back. This is not ideal but pragmatic for the state of the system. Both StoryQ and Slim can report back this type of information – in these examples, Slim has this implemented but not StoryQ.

  • StoryQ has the least barrier to entry and keeps the tests focussed at the developers
  • StoryQ is easy to get onto a build server
  • StoryQ is much better for refactoring because it is in C# in an IDE and is not using (magic) string matching – you could though
  • StoryQ is generally unknown and is harder to introduce particularly to devs that haven’t really grasped Fitnesse
  • Cucumber could just as easily be used (I just thought that I would throw that in from left field)
  • Given/When/Then may be limited as a rubric for testing – but those constraints are also benefits
  • Slim realistically allows more comments around tests
  • Wiki reporting is potentially richer and closer to the customer
  • Separate systems of a wiki and code is good separation
  • The wiki and code are nonetheless two different systems – that comes at a cost
  • If the wiki isn’t driving development then I’m not convinced it is cost effective
  • Slim required more knowledge around which table to use and its constraints – that’s (sort of) a one-time cost
  • I wouldn’t mind a Given/When/Then table for Slim – apparently UncleBob might be writing one?

Rock on having customers

A final note on this example. I am most interested in the benefits of this type of testing rather than the specific implementation. Anything that allows us to have more conversations with our code base and the people who develop it, the better. Writing the user stories down before development means we need to do the “outside in” thing first. By coming back to them, I have found that it flushes out assumptions made during development and makes us remember our earlier discussions. It is easy to forget that our customers are often good at helping us create abstractions in our code base that we lose sight of in the middle of coding the solution. Those abstractions live in the user stories – to hook up the user stories to the SUT we often need to decouple our code in ways that are not required when our only client of the SUT is the application/UI. So I say rock on having customers that require us to have proxies in the form of code to represent them!


test-automation-pyramid-and-webservices

August 24th, 2010

Using web services is easy. That is, if we listen to vendors: point to a WSDL and hey presto, easy data integration. Rarely is it that easy for professional programmers. We have to be able to get through to endpoints via proxies, use credentials and construct the payloads correctly, and all this across different environments. Add the dominance of point-and-click WSDL integration, and many developers I talk to don’t really work at the code level for these types of integration points, or if they do, it is to the extent of passive code generators. So to suggest TDD on web services is, at best, perplexing. Here I am trying to explain how the test automation pyramid helps with TDDing web services.

Test-first web services?

Can we do test-first development on web services? Or is test-first web services an oxymoron? (ok, assume each is only one word ;-) ) My simple answer is no. But the more complex answer is that you have to accept some conditions to make it worthwhile. These conditions, to me, are important concepts that lead us to do test-first rather than the other way around.

One condition is that my own domain concepts remain clean. To keep things clean, I keep the web service’s domain from being within my domain. For example, if you have a look at the samples demonstrating how to use a web service, the service proxy and its domain are right there in the application – so for a web application, you’ll see references in the code-behind. This worries me. When the web service changes and the code is regenerated, the new code is then likely to be littered throughout that code base. Another related condition is that integration points should come through my infrastructure layer because it aids testability. So, if I can get at the service reference directly, I am in most cases going to have an ad hoc error handling, logging and domain mapping strategy.

Another condition is that there is an impedance mismatch between our domain and the service domain. We should deal with this mismatch as early as possible and as regularly as possible. We should also deal with these issues test-first and in isolation from the rest of the code. In practice, this is a mapper concern and we have a vast array of options (eg the AutoMapper library, LINQ). These options are likely to depend on the complexity of the mapping. For example, if we use WSE3 bindings then we will be mapping from an XML structure into an object. Here we’ll most likely do the heavy lifting with an XML parser such as System.Xml.Linq. Alternatively, if we are using the ServiceModel bindings then we will be mapping object to object. If these models follow similar conventions we might get away with AutoMapper and if not we are likely to roll our own. I would suggest, if you are rolling your own, that the interface of AutoMapper is nice to follow though (eg Mapper.Map<T, T1>(data)).
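
To make the mapper concern concrete, here is a minimal sketch of a hand-rolled mapper that borrows the AutoMapper-style call shape. The RiskMessage (service model), Risk (domain model) and the mapping rule are all hypothetical, invented for illustration only.

using System;

namespace Infrastructure.ServiceReference.Mapper
{
    // Hypothetical service reference model (the shape the web service gives us)
    public class RiskMessage
    {
        public string RiskNumber { get; set; }
        public string Status { get; set; }
    }

    // Hypothetical domain model (the shape we want inside our domain)
    public class Risk
    {
        public string Number { get; set; }
        public bool IsCurrent { get; set; }
    }

    public class RiskMapper
    {
        // One direction of the impedance mismatch lives here and is unit tested in isolation
        public Risk Map(RiskMessage message)
        {
            if (message == null) throw new ArgumentNullException("message");
            return new Risk
                       {
                           Number = message.RiskNumber,
                           IsCurrent = message.Status == "CURRENT" // example mapping rule only
                       };
        }
    }
}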

I think there is a simple rule of thumb with mapping domains: you can either deal with the complexity now or later. But, regardless, you’ll have to deal with the intricacies at some point. Test-first demands that you deal with them one-at-a-time and now. Alternatively, you delay and you’ll have to deal with them at integration time, and this often means that someone else is finding them – and in the worst case, it is in production! I am always surprised at how many “rules” there are when mapping domains and also how much we can actually do before we try to integrate. I was just working with a developer who has done a lot of this type of integration work but never test-first. As we proceeded test-first, she started reflecting on how much of the mapping work that she usually did under test conditions could be moved forward into development. On finishing that piece of work we were also surprised how many tests were required to do the mapping – a rough count was 150 individual tests across 20 test classes. This was for mapping two similar domains each with 5 domain models.

What code do you really need to write?

So let’s say that you accept that you don’t want to have a direct reference to the client proxy, what else is needed? Of course, the answer is it depends. It depends on:

  • client proxy generated (eg WSE3 vs ServiceModel): when using the client proxy, WSE3 will require a little more inline work around, say, the Proxy and SetClientCredential methods, whereas ServiceModel can have it inline or delegated to the configuration file
  • configuration model (eg xml (app.config) vs fluent configuration (programmatic)): you may want to deal with configuration through a fluent configuration regardless of an app.config. This is useful for configuration checking and logging within environments. Personally, the more self-checking you have now for configuration settings, the easier the code will be to deploy through the environments. Leaving configuration checking and setting solely to operations people is a good example of throwing code over the fence. Configuration becomes someone else’s problem.
  • reference data passed with each request: most systems require some form of reference data that exists with each and every request. I prefer to avoid handling that at the per-request level but rather when instantiating the service. This information is less likely to change than the credential information.
  • security headers: you may need to add security headers to your payload. I forget which WS-* standard this relates to, but it is a strategy, just like proxies and credentials, that needs to be catered for. WSE3 and ServiceModel each have their own mechanisms to do this.
  • complexity of domain mappings: you will need to call the mapping concern to do this work, but it should only be a one-liner because you have designed and tested this somewhere else. It is worth noting the extent of difference though. With simple pass-through calls some mappings are almost a simple value – take, for example, a calculation service which upon return may be a simple value. However, with domain synching the domain mappings are a somewhat complex set of rules to get the service to accept your data.
  • error handling strategy: we are likely to want to catch exceptions and throw our own kind so that we can catch them further up in the application (eg the UI layer). With the use of lambdas this is a straightforward and clean way to try/catch method calls to the service client (see the sketch after this list).
  • logging strategy particularly for debugging: you are going to need to debug payloads at some stage. Personally, I hate stepping through and that doesn’t help outside of development environments. So, a good set of logging is needed too. I’m still surprised how often code doesn’t have this.
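
As a rough sketch of the error handling point above (and only a sketch – the PolicyServiceException, IPolicyServiceProxy and PolicyServiceClient names are invented here, not from any library), wrapping each call to the generated proxy in a lambda keeps the try/catch, our own exception type and the DEBUG logging in one place:

using System;

namespace Infrastructure.ServiceReference
{
    // Our own exception type so that upper layers (eg the UI layer) only catch one kind
    public class PolicyServiceException : Exception
    {
        public PolicyServiceException(string message, Exception inner) : base(message, inner) { }
    }

    // Stand-in for the generated WSE3/ServiceModel proxy (assumed)
    public interface IPolicyServiceProxy
    {
        string GetPolicyXml(string policyNumber);
    }

    public class PolicyServiceClient
    {
        private readonly IPolicyServiceProxy _proxy;

        public PolicyServiceClient(IPolicyServiceProxy proxy)
        {
            _proxy = proxy;
        }

        public string GetPolicy(string policyNumber)
        {
            // the lambda keeps each public method to a one-liner;
            // mapping would be delegated to a separately tested mapper
            return Try(() => _proxy.GetPolicyXml(policyNumber));
        }

        private static T Try<T>(Func<T> call)
        {
            try
            {
                return call();
            }
            catch (Exception e)
            {
                // log the payload and exception at DEBUG level here (eg via log4net), then rethrow our own kind
                throw new PolicyServiceException("Policy service call failed", e);
            }
        }
    }
}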

Test automation pyramid

Now that we know what code we need to write, what types of tests are we likely to need? If you are unfamiliar with the test automation pyramid or my specific usage see test automation pyramid review for more details.

System Tests

  • combine methods for workflow
  • timing acceptance tests

Integration Tests

  • different scenarios on each method
  • exception handling (eg bad credentials)

Unit Tests

  • Each method with mock (also see mocking webservices)
  • exception handling on each method
  • mapper tests

More classes!

Without going into implementation details, all of this means that there is a boilerplate of likely classes. Here’s what I might expect to see in one of my solutions. A note on conventions: a slash ‘/’ denotes a folder rather than a file; <Service>, <Model> and <EachMethod> are specific to your project and indicate that there is likely to be one or more of that type; names with .xml are xml files and all others, if not folders, are .cs files.

  Core/
    ServiceReference/
      I<Service>Client
      <Service>Exception
    Domain/
      <Model>
      ServiceReference/
        Credential<Model>
  
  Infrastructure/
    Service References/                  <-- auto generated from Visual Studio ServiceModel
    Web Reference/                       <-- auto generated from Visual Studio WSE3
    ServiceReference/
      <Service>Client
      Mapper/
        <Model>Mapper
  
  Tests.System/
    Acceptance/
      ServiceReference/
        <Service>/
          WorkflowTest
          TimingTest
    Manual/
      ServiceReference/
         <Service>-soap-ui.xml
  
  Tests.Integration/
    ServiceReference/
      <Service>/
        <Method>Test
  
  Tests.Unit/
    ServiceReference/
      <Service>/
        ConstructorTests
        <Method>Test
        ExceptionsTest
        Mappers/
          <Model>Test
        Security/
          CredentialTest                  <-- needed if header credential
    Fixtures/
      ServiceReference/
        <Service>/
          Mock/Fake<Service>Client        
          Credentials.xml
          Request.xml                     <-- needed if WSE3
          Response.xml                    <-- needed if WSE3
      Domain/
        <Model>ObjectMother
        ServiceReference/
          Credential<Model>ObjectMother   <-- needed if header credential

That’s a whole lot more code!

Yup, it is. But each concern and test is now separated out and you can work through them independently and systematically. Here’s my point: you can deal with these issues now and have a good test bed so that when changes come through you have change-tolerant code and know you’ve been as thorough as you can be with what you presently know. Or, you can deal with it later at integration time when you can ill afford to be the bottleneck in the highly visible part of the process.

Potential coding order

Now that we have a boilerplate of options, I tend to want to suggest an order. With the test automation pyramid, I suggest designing as a sketch and the domain model/services first. Then write the system test stubs, then come back through unit and integration tests before completing/implementing the system test stubs. Here’s my rough ordered list:

  1. have a data mapping document – Excel, Word or some form of table is excellent and often provided by BAs – you still have to have some analysis of the differences between your domain and theirs
  2. generate your Service Reference or Web Reference client proxy code – I want to see what the models and endpoints look like – I may play with them via soapUI – but usually leave that for later if at all needed
  3. write my system acceptance test stubs – here I need to understand how these services fit into the application and what the end workflow is going to be. For example, I might write these as user story given/when/then scenarios. I do not try and get these implemented other than compiling because I will come back to them at the end. I just need a touch point of the big picture.
  4. start writing unit tests for my Service Client – effectively, I am doing test-first creation of my I<Service>Client making sure that I can use each method with an injected Mock/Fake<Service>Client (a sketch follows this list)
  5. unit test out my mappers – by now, I will be thinking about the request/response cycle and will need to create ObjectMothers to be translated into the service reference domain model to be posted. I might be working in the other direction too – which is usually a straightforward mapping but gets clearer once you start integration tests.
  6. integration test on each method – once I have a good set of mappers, I’ll often head out to the integration point and check out how broken my assumptions about the data mapping are. Usually, as assumptions break down I head back into the unit test to improve the mappers so that the integration tests work – this is where the most work occurs!
  7. at this point, I now need good DEBUG logging and I’ll just ensure that I am not using the step-through debugger but rather good log files at DEBUG level.
  8. write system timing tests because sometimes there is an issue that the customer needs to be aware of
  9. implement the system tests that can now be completed, given the unit/integration tests for the methods implemented thus far
  10. add exception handling unit tests and code
  11. add credential headers (if needed)
  12. back to system tests and finish off and implement the original user stories
  13. finally, sometimes, we need to create a set of record-and-replay tests for other people testing. SoapUI is good for this and we can easily save them in source for later use.
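
For step 4, here is a minimal sketch of what one of those unit tests might look like. All of the types – IRiskServiceProxy, FakeRiskServiceProxy, RiskClient and Risk – are hypothetical stand-ins for your generated proxy, your fake, your I<Service>Client implementation and your domain model.

using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace Tests.Unit.ServiceReference
{
    // Stand-in for the generated proxy hidden behind our own interface (assumed)
    public interface IRiskServiceProxy
    {
        string GetCurrentRiskNumber(string policyNumber);
    }

    // Canned responses, no network – would live under Fixtures in the structure above
    public class FakeRiskServiceProxy : IRiskServiceProxy
    {
        public string GetCurrentRiskNumber(string policyNumber)
        {
            return "001";
        }
    }

    public class Risk
    {
        public string Number { get; set; }
    }

    // Hypothetical I<Service>Client implementation: proxy injected, never newed up inside
    public class RiskClient
    {
        private readonly IRiskServiceProxy _proxy;

        public RiskClient(IRiskServiceProxy proxy)
        {
            _proxy = proxy;
        }

        public Risk GetCurrentRisk(string policyNumber)
        {
            return new Risk { Number = _proxy.GetCurrentRiskNumber(policyNumber) };
        }
    }

    [TestClass]
    public class RiskClientTest
    {
        [TestMethod]
        public void GetCurrentRiskMapsProxyResponseIntoDomainModel()
        {
            var client = new RiskClient(new FakeRiskServiceProxy());

            var risk = client.GetCurrentRisk("12-4567846-POF");

            Assert.AreEqual("001", risk.Number);
        }
    }
}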

Some problems

Apart from not having presented any specific code, here are some problems I see:

  • duplication: your I<Service>Client and the generated proxy are probably very similar, with the difference that yours returns your domain model objects. I can’t see how to get around this given your I<Service>Client is an anti-corruption class.
  • namespacing/folders: I have suggested ServiceReference/<Service>/ as folder structure. This is a multi-service structure so you could ditch the <Service> folder if you only had one.
  • Fixtures.ServiceReference.<Service>.Mock/Fake<Service>Client: this implementation is up to you. If you are using ServiceModel then you have an interface to implement against. If you are using WSE3 you don’t have an interface – try extending through partial classes or wrapping with another class (see the sketch below).
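
A tiny sketch of the partial class option (the IRiskService interface and the RiskServiceWse proxy name are hypothetical; the other half of the partial class is assumed to be the WSE3-generated proxy with a matching method):

namespace Infrastructure.WebReference
{
    // Our own interface so the proxy can be injected and faked in unit tests (assumed names)
    public interface IRiskService
    {
        string GetCurrentRiskNumber(string policyNumber);
    }

    // The WSE3-generated proxy is already a partial class; this second partial
    // declaration simply pins our interface onto it without touching generated code.
    public partial class RiskServiceWse : IRiskService
    {
    }
}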

Test Automation Pyramid – review

August 4th, 2010

Test Automation Pyramid

This review has turned out to be a little longer than I expected. I have been using my own layering of the test automation pyramid and the goal was to come back and check it against the current models. I think that my usage is still appropriate but, because it is quite specific, it wouldn’t work for all teams, so I might pick and choose between the others when needed. If you write line-of-business web applications which tend to have integration aspects then I think this might be useful. The key difference from the other models is that my middle layer in the pyramid is perhaps a little more specific, as I try and recapture the idea of integration tests within the context of xUnit tests. Read on if you are interested – there are some bullet points at the end if you are familiar with the different models. Just a disclaimer that my reference point is .Net libraries and line-of-business web applications because of my work life at the moment. I am suggesting something slightly different than, say, this approach but it doesn’t seem that different to this recent one and you can also find it implicitly in Continuous Delivery by Farley and Humble.

A little history

The test automation pyramid has been credited to Mike Cohn of Mountain Goat Software. It is a rubric concerning three layers that is foremost concerned with how we think about going about testing – what types of tests we run and how many of each. While various authors have modified the labels, it has changed very little, and I suspect that is because its simplicity captures people’s imagination. In many ways, it makes an elementary point: automating your manual testing approach is not good enough, particularly if that testing was primarily through the GUI. Mike explains that this was the starting point for the pyramid; he says,

The first time I drew it was for a team I wanted to educate about UI testing. They were trying to do all testing through the UI and I wanted to show them how they could avoid that. Perhaps since that was my first use it, that’s the use I’ve stuck with most commonly.

Instead, change the nature of the tests – there is a lot of common sense to this. Take, for instance, a current web application I am working on. I have a test team that spends a lot of time testing the application through the browser UI. It is a complex insurance application that connects to the company’s main system. Testers are constantly looking at policies and risks and checking data and calculations. In using the UI, however, they have tacit knowledge of what they are looking at and why – they will have knowledge about the policy from the original system and the external calculation engine. It is this tacit knowledge and the knowledge about the context – in this case, a specific record (policy) that meets a criteria – that is difficult to automate in its raw form. Yet each time they do a manual test in this way the company immediately loses its investment in this knowledge by allowing it to stay in the head of only one person. Automating this knowledge is difficult, however. All this knowledge is wrapped into specific examples found manually. The problem when you automate this knowledge is that it is example-first testing, and when you are testing against data like this, today’s passing test may fail tomorrow. Or even worse, the test may fail the day after when you can’t remember a lot about the example! Therefore, the test automation pyramid is a mechanism to get away from a reliance on brittle end-to-end testing and the desire to simply automate your manual testing. It moves towards layering your testing – making multiple types of tests, maintaining boundaries, testing relationships across those boundaries and being attentive to the data which flows.

Here are the three main test automation pyramids out there in the wild. Take a look at these and then I will compare them below in a table and put them alongside my nomenclature.

Cohn

Mezaros

Crispin

Test Automation Pyramid: a comparison

	
   Cohn       Mezaros        Crispin            Suggested
   
                            (manual)          (exploratory)
     UI**      System          GUI               System
   Service    Component    Acceptance (api)    Integration
    Unit        Unit        Unit/Component         Unit  

** in a recent blog post and subsequent comments Mike agrees with others that the UI layer may be better called the “End-to-End” tests. Mike points out that when he started teaching this 6-7 years ago the term UI was the best way to explain the problem to people – that problem being that automating manual tests means automating GUI tests, and that the result is (see Continuous Testing: Building Quality into Your Projects):

Brittle. A small change in the user interface can break many tests. When this is repeated many times over the course of a project, teams simply give up and stop correcting tests every time the user interface changes.
Expensive to write. A quick capture-and-playback approach to recording user interface tests can work, but tests recorded this way are usually the most brittle. Writing a good user interface test that will remain useful and valid takes time.
Time consuming. Tests run through the user interface often take a long time to run. I’ve seen numerous teams with impressive suites of automated user interface tests that take so long to run they cannot be run every night, much less multiple times per day.

Related models to test automation pyramid

Freeman & Pryce: while not a pyramid explicitly, they build a hierarchy of tests that correspond to some nested feedback loops:

Acceptance: Does the whole system work?
Integration: Does our code work against the code we can’t change?
Unit: Do our objects do the right thing, are they convenient to work with?

Stephens & Rosenberg (Design Driven Testing: Test Smarter, Not Harder) have a four-level approach in which the tests across the layers increase in granularity and size as you move down to unit tests – and business requirement tests are seen as manual. Their process of development is closely aligned to the V model of development (see pp.6-8):

four principal test artifacts: unit tests, controller tests, scenario tests, and business requirement tests. As you can see, unit tests are fundamentally rooted in the design/solution/implementation space. They’re written and “owned” by coders. Above these, controller tests are sandwiched between the analysis and design spaces, and help to provide a bridge between the two. Scenario tests belong in the analysis space, and are manual test specs containing step-by-step instructions for the testers to follow, that expand out all sunny-day/rainy-day permutations of a use case. Once you’re comfortable with the process and the organization is more amenable to the idea, we also highly recommend basing “end-to-end” integration tests on the scenario test specs. Finally, business requirement tests are almost always manual test specs; they facilitate the “human sanity check” before a new version of the product is signed off for release into the wild.

The pyramid gets its shape because:

  • Large numbers of very small unit tests – set a foundation on simple tests
  • Smaller number of functional tests for major components
  • Even fewer tests for the entire application & workflow

With a cloud above because:

  • there are always some tests that need not or should not be automated
  • you could just say that system testing requires manual and automated testing

General rules:

  • need multiple types of tests
  • tests should be complementary
  • while there looks like overlap, different layers test at a different abstraction
  • it is harder to introduce a layered strategy

So, as Mezaros points out, multiple types of tests aware of boundaries are important:

Mezaros - Why We Need Multiple Kinds of Tests

Compare and Contrast

Nobody disagrees that the foundation is unit tests. Unit tests tell us exactly which class/method is broken. xUnit testing is dominant here as an approach and is well documented, regardless of classical TDD, mocking TDD or fluent-language style BDD (eg should syntax helpers, given/when/then or given/it/should). Regardless of the specifics, each test tests one thing, is written by the developer, is likely to have a low or no setup/teardown overhead, should run and fail fast, and is the closest and most fine-grained view of the code under test.

There are some slight differences that may be more than nomenclature. Crispin includes component testing in the base layer whereas Mezaros puts it in the next layer (which he defines as functional tests of major components). I’m not that sure that in practice they would look different. I suspect Crispin’s component testing would require mocking strategies to inject dependencies and isolate boundaries. Therefore, the unit/component of Crispin suggests to me that the unit test can be on something larger than a “unit” as long as it still meets the boundary condition of being isolated. One question I would have here for Crispin is: in the unit tests would you include, for instance, tests with a database connection? Does this count as a component or an api?

The second layer starts to see some nuances but tends towards telling us which components are at fault. These are tested by explicitly targeting the programmatic boundaries of components. Cohn called this layer services while Crispin calls it the API. Combine that with Mezaros’ component and all are suggesting that there is a size of form within the system – a component, a service – that is bigger than the unit but smaller than the entire system. For Mezaros, component tests would also test complex business logic directly. For Cohn, the service layer is the api (or logical layer) in between the user interface and very detailed code. He says, “service-level testing is about testing the services of an application separately from its user interface. So instead of running a dozen or so multiplication test cases through the calculator’s user interface, we instead perform those tests at the service level”. So what I think we see in the second layer is expedient testing in comparison to the UI. Although what I find hard to see is whether or not this level expects dependencies to be running within an environment. For example, Mezaros and Crispin both suggest that we might be running FIT, which often requires a working environment in place (because of the way the tests are written). In practice, I find this a source of confusion that is worth considering. (This is what I return to in a clarification of integration testing.)

On to the top layer. This has the least amount of tests and works across the entire application, often focusing on workflow. Cohn and Crispin focus on the UI to try and keep people mindful of keeping these tests small. Mezaros makes a more interesting move to subsume UI testing into system testing. Inside system tests he also includes acceptance testing (eg FIT) and manual tests. Crispin also accounts for manual testing in the cloud above the pyramid. Either way, manual testing is still an important part of a test automation strategy. For example, record-and-replay tools, such as Selenium or WATIR/N, have made browser-based UI testing easier. You can use these tools to help script browser testing and then you have the choice whether to automate or not. However, UI testing still suffers from entropy and hence is the hardest to maintain over time – particularly if the tests are dependent on data.

Here are some of the issues I find coming up out of these models.

  • the test automation pyramid requires a known solution architecture with good boundary definition that many teams haven’t made explicit (particularly through sketching/diagrams)
  • existing notions of the “unit” test are not as subtle as an “xUnit” unit test
  • people are too willing to accept that some boundaries aren’t stub-able – often pushing unit tests into the service/acceptance layer
  • developers are all too willing to see (system) FIT tests as the testers (someone else’s) responsibility
  • it is very hard to get story-test driven development going and the expense of FIT quickly outweighs benefits
  • we can use xUnit type tests for system layer tests (eg xUnit-based StoryQ or Cucumber)
  • you can combine different test runners for the different layers (eg xUnit, Fitnesse and Selenium and getting a good report)
  • developers new to TDD and unit testing tend to use test-last strategies that are of the style best suited for system layer tests – and get confused why they see little benefit

Modified definition of the layers

Having worked with these models, I want to propose a slight variation. This variation is in line with Freeman and Pryce.

  • Unit – a test that has no dependencies (do our objects do the right thing, are they convenient to work with?)
  • Integration – a test that has only one dependency and tests one interaction (usually, does our code work against code that we can’t change?)
  • System – a test that has one-or-more dependencies and many interactions (does the whole system work?)
  • Exploratory – test that is not part of the regression suite, nor should have traceability requirements

Unit tests

There should be little disagreement about what makes good xUnit tests: it is well documented although rarely practised in line-of-business applications. As Michael Feathers says,

A test is not a unit test if:
* it talks to the database
* it communicates across the network
* it touches the file system
* it can’t run at the same time as any of your other unit tests
* you have to do special things to your environment (such as editing config files) to run it
Tests that do these things aren’t bad. Often they are worth writing, and they can be written in a unit test harness. However, it is important to keep them separate from true unit tests so that we can run the unit tests quickly whenever we make changes.

Yet often unit tests are difficult because they work against the grain of the application framework. I have documented my approach elsewhere for an MVC application where I not only unit test the usual suspects of domain models, services and various helpers such as configurators or data mappers, but also my touch points with the framework such as routes, controllers and action redirects (see MvcContrib which I contributed to with the help of a colleague). I would expect unit tests of this kind to be test-first because it is about design with a touch of verification. If you can’t test-first this type of code base it might be worthwhile spending a little time understanding what part of the system you are unable to understand (sorry, mock out!). Interestingly, I have found that I can test-first BDD-style parts of my GUI in the browser using jQuery and JsSpec (or QUnit) (here’s an example) – you have to treat javascript as a first-class citizen at this point, so here’s a helper for generating the scaffolding for the jquery plugin.
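
As an example of testing a framework touch point, here is a minimal sketch of a route unit test. The PolicyController and the route registration are hypothetical; the ShouldMapTo assertion comes from the MvcContrib.TestHelper library, assuming you have it referenced.

using System.Web.Mvc;
using System.Web.Routing;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using MvcContrib.TestHelper;

namespace Tests.Unit.Routes
{
    // Hypothetical controller used only to illustrate the route assertion
    public class PolicyController : Controller
    {
        public ActionResult Index()
        {
            return new EmptyResult();
        }
    }

    [TestClass]
    public class PolicyRouteTest
    {
        [TestInitialize]
        public void Setup()
        {
            RouteTable.Routes.Clear();
            // normally you would call your application's RegisterRoutes(RouteTable.Routes) here
            RouteTable.Routes.MapRoute(
                "Default",
                "{controller}/{action}/{id}",
                new { controller = "Home", action = "Index", id = UrlParameter.Optional });
        }

        [TestMethod]
        public void PolicyUrlMapsToPolicyControllerIndexAction()
        {
            "~/policy".ShouldMapTo<PolicyController>(c => c.Index());
        }
    }
}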

So I have a checklist of unit tests I expect to find in a line-of-business unit test project:

  • model validations
  • repository validation checking
  • routes
  • action redirects
  • mappers
  • any helpers
  • services (in the DDD sense)
  • service references (web services for non-Visual Studio people)
  • jquery

Integration tests

Integration tests may have a bad name if you agree with JB Rainsberger that integration tests are a scam, because you would see integration tests as exhaustive testing. I agree with him on that count, so I have attempted to reclaim integration testing and use the term quite specifically in the test automation pyramid. I use it to mean testing only one dependency and one interaction at a time. For example, I find this approach helpful with testing repositories (in the DDD sense). I do integration test repositories because I am interested in my ability to manage an object lifecycle through the database as the integration point. I therefore need tests that prove CRUD-type functions through an identity – Eric Evans (DDD, p.123) has a diagram of an object’s lifecycle that is very useful to show the links between an object lifecycle and the repository.


Using this diagram, we are interested in the identity upon saving, retrieval, archiving and deletion because these touch the database. For integration tests, we are likely to be less interested in creation and updates because they wouldn’t tend to need the dependency of the database. These then should be pushed down to the unit tests – such as validation checking on update.
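
A minimal sketch of what such a repository integration test might look like (Policy and PolicyRepository are hypothetical and assumed to exist in the code base; the test also assumes a test database that has been migrated and seeded):

using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace Tests.Integration.Repositories
{
    [TestClass]
    public class PolicyRepositoryTest
    {
        private PolicyRepository _repository;

        [TestInitialize]
        public void Setup()
        {
            // one dependency only: the database behind the repository (assumed migrated/seeded)
            _repository = new PolicyRepository();
        }

        [TestMethod]
        public void CanSaveRetrieveAndDeleteAPolicyThroughItsIdentity()
        {
            var policy = new Policy { Number = "12-4567846-POF" };

            var id = _repository.Save(policy);              // identity assigned on save
            var retrieved = _repository.Get(id);
            Assert.AreEqual("12-4567846-POF", retrieved.Number);

            _repository.Delete(id);
            Assert.IsNull(_repository.Get(id));             // gone once deleted
        }
    }
}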

On the web services front, I am surprised how often I don’t have integration tests for them, because the main job of my code is (a) to test mappings between the web service object model and my domain model, which is a unit concern, or (b) the web service’s use of data, which I would tend to make a system concern.

My checklist for the integration tests is somewhat shorter than for unit tests:

  • repositories
  • service references (very lightly done here just trying to think about how identity is managed)
  • document creation (eg PDF)
  • emails

System tests

System testing is the end-to-end tests and covers any number of dependencies to satisfy the test across scenarios. We are often asking various questions in system testing: is it still working (smoke)? Do we have problems when we combine things and they interact (acceptance)? I might even have a set of tests that are scripts for manual tests.

Freeman and Pryce (p.10) write:

There’s been a lot of discussion in the TDD world over the terminology for what we’re calling acceptance tests: “functional tests”, “system tests.” Worse, our definitions are often not the same as those used by professional software testers. The important thing is to be clear about our intentions. We use “acceptance tests” to help us, with the domain experts, understand and agree on what we are going to build next. We also use them to make sure that we haven’t broken any existing features as we continue developing. Our preferred implementation of the “role” of acceptance testing is to write end-to-end tests which, as we just noted, should be as end-to-end as possible; our bias often leads us to use these interchangeably although, in some cases, acceptance tests might not be end-to-end.

I find smoke tests invaluable. To keep them cost effective, I also keep them slim and often throw some away once I have found that the system has stabilised and the cost of maintaining them is greater than the benefit. Because I tend to write web applications, my smoke tests run through the UI and I use record-and-replay tools such as Selenium. My goal with these tests is to target parts of the application that are known to be problematic so that I get as early a warning as possible. These types of system tests must be run often – and often here means starting in the development environment and then moving through the build and test environments. But running in all these environments comes with a significant cost, as each tends to need its own configuration of tools. Let me explain, because the options all start getting complicated but may serve as a lesson for tool choice. In one project, we used Selenium. We used Selenium IDE in Firefox for record and replay. Developers used this for their development/testing (and devs have visibility of these tests in the smoke test folder). These same scripts were deployed onto the website so that testing in the test environments could be done through the browser (this uses Selenium Core and is located in the web\tests folder – the same tests but available for different runners). On the build server, the tests ran through SeleniumHQ (because we ran a Hudson build server) – although in earlier phases we had actually used Selenium RC with NUnit, which I found in practice hard to maintain [we could have alternatively used Fitnesse and Selenese!]. As an aside, we found that we also used the smoke tests as our demo back to the client, so we had them deployed on the test web server to run through the browser (using Selenium Core). To avoid maintenance costs, the trick was to write them specific enough to test something real and general enough not to break over time. Or, if they do break, it doesn’t take too long to diagnose and remedy. As another aside, Selenium tests, when broken into smaller units, use a lot of block-copy inheritance (ie copy and paste). I have found that this is just fine as long as you use find-and-replace strategies for updating data. For example, we returned to this set of tests two years later and they broke because of date-specific data that had expired. After 5 minutes I had worked out that a find-and-replace for a date would fix the problem. I was surprised, to tell the truth!

Then there are the acceptance tests. These are the hardest to sustain. I see them as having two main functions: they have a validation function because they link customer abstractions of workflow with code, and they ensure good design of the code base. A couple of widely used approaches to acceptance-type tests are FIT/Fitnesse/Slim/FitLibraryWeb (or StoryTeller) and BDD-style user stories (eg StoryQ, Cucumber). Common to both is to create an additional layer of abstraction for the customer view of the system and another layer that wires this up to the system under test. Both are well documented on the web and in books so I won’t labour the details here (just a wee note that you can see a .net bias simply because of my work environment rather than personal preference). I wrote about acceptance tests and the need for a fluent interface level in an earlier entry:

I find that to run maintainable acceptance tests you need to create yet another abstraction. Rarely can you just hook up the SUT api and it works. You need setup/teardown data and various helper methods. To do this, I explicitly create “profiles” in code for the setup of data and exercising of the system. For example, when I wrote a banner delivery tool for a client (think OpenX or GoogleAds) I needed to create a “Configurator” and an “Actionator” profile. The Configurator was able to create a number of banner ads in the system (eg an html banner on this site, a text banner on that site) and the Actionator then invoked 10,000 users on this page on that site. In both cases, I wrote C# code to do the job (think an internal DSL as a fluent interface) rather than doing it, say, in Fitnesse.

I have written a series of blogs on building configurators through fluent interfaces here.
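
For what it’s worth, here is a tiny, hypothetical sketch of the Configurator idea – an internal DSL as a fluent interface that acceptance tests use to set up data. The banner/site example echoes the quote above; none of these names come from a real code base.

using System.Collections.Generic;

namespace Tests.System.Acceptance
{
    public class Configurator
    {
        private readonly List<string> _banners = new List<string>();

        public Configurator WithHtmlBannerOn(string site)
        {
            _banners.Add("html:" + site);   // would call the SUT api to create the banner
            return this;                    // returning 'this' is what makes the interface fluent
        }

        public Configurator WithTextBannerOn(string site)
        {
            _banners.Add("text:" + site);
            return this;
        }

        public int Setup()
        {
            return _banners.Count;          // would persist via the SUT and report back what was created
        }
    }
}

// usage inside an acceptance test setup:
//   new Configurator()
//       .WithHtmlBannerOn("this-site.example")
//       .WithTextBannerOn("that-site.example")
//       .Setup();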

System tests may or may not have control over setup and teardown. If you do, all the better. If you don’t, you’ll have to work a lot harder in the setup phase. I am currently working on a project in which a system test works across three systems. The first is the main (legacy – no tests) system, the second a synching/transforming system and the third the target system. (Interestingly, the third only exists because no one will tackle extending the first, and the second only exists because the third needs data from the first – it’s not that simple but you get the gist.) The system tests in this case become less about testing that the code is correct. Rather, they are more about whether the code interacts with the data from the first system in ways that we would expect/accept. As a friend pointed out, this becomes an issue akin to process control. Because we can’t set up data in the xUnit fashion, we need to query the first system directly and cross reference against the third system. In practice, the acceptance test helps us refine our understanding of the data coming across – we find the implicit business rules and, in fact, data inconsistencies, or straight-out errors.

Finally, manual tests. When using Selenium my smoke tests are my manual tests. But in the case of web services or content gateways, there are times when I want to be able to make one-off (but repeatable) tests that require tweaking each time. I may then be using these to inspect results too. These make little sense to invest in automating, but they do make sense to have checked into source control, leaving them there for future reference. Some SoapUI tests would fit this category.

A final point I want to make about the nature of system tests: they are both written first and last. Like story-test driven development, I want acceptance (story) tests written into the code base to begin with and then set to pending (rather than failing). I then want them to be hooked up to the code as the final phase and set to passing (eg Fitnesse fixtures). By doing them last, we already have our unit and integration tests in place. In all likelihood, by the time we get to the acceptance tests we are dealing with issues of design validation in two senses. One, that we don’t understand the essential complexity of the problem space – there are things in there that we didn’t understand or know about that are now coming to light, and this suggests that we will be rewriting parts of the code. Two, that the design of the code isn’t quite as SOLID as we thought. For example, I was just working on an acceptance test where I had to do a search for some data – the search mechanism, I found, had constraints living in the SQL code that I couldn’t pass in, and I had to have other data in the database.

For me, tests are the ability for us as coders to have conversations with our code. By the time we have had the three conversations of unit, integration and acceptance, I find that three is generally enough to have a sense that our design is good enough. The acceptance tests are important because they help good layering and enforce that crucial logic doesn’t live in the UI layer but rather down in the application layer (in the DDD sense). The acceptance tests, like the UI layer, are both clients of the application layer. I often find that the interface created in the application layer, not surprisingly, services the UI layer and will require refactoring for a second client. Very rarely is this unhelpful, and often it finds that next depth of bugs.
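
A minimal sketch of that layering idea: the controller and the acceptance test drive the same application-layer interface. All of the names here (IPolicyRenewalService, RenewalController and so on) are illustrative, not taken from any of the projects above.

    using NUnit.Framework;

    public class RenewalOffer { public string PolicyNumber; }

    public interface IPolicyRenewalService
    {
        RenewalOffer PrepareRenewal(string policyNumber);
    }

    // client one: the UI layer
    public class RenewalController
    {
        private readonly IPolicyRenewalService _service;
        public RenewalController(IPolicyRenewalService service) { _service = service; }
        public RenewalOffer Show(string policyNumber) { return _service.PrepareRenewal(policyNumber); }
    }

    // client two: the acceptance test, driving the same interface rather than the GUI
    [TestFixture]
    public class RenewalAcceptanceTests
    {
        [Test]
        public void A_renewal_offer_is_prepared_for_an_existing_policy()
        {
            IPolicyRenewalService service = CreateServiceAgainstTestData(); // wire up the real application layer here
            Assert.IsNotNull(service.PrepareRenewal("POL-001"));
        }

        private IPolicyRenewalService CreateServiceAgainstTestData()
        {
            throw new System.NotImplementedException("hypothetical wiring left out of the sketch");
        }
    }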

My system tests checklist:

  • acceptance
  • smoke
  • manual

Exploratory

The top level – system and exploratory testing – is all about putting it together so that people across different roles can play with the application and use complex heuristics to check its workings. But I don’t think the top level is really about the user interface per se. It only looks that way because the GUI is the most generalised abstraction through which customers and testers believe they understand the workings of the software. Working software and the GUI should not be conflated. Complexity at the top-most level is that of many dependencies interacting with each other – context is everything. Complexity here is not linear. We need automated system testing to follow critical paths that create combinations or interactions that we can prove do not have negative side effects. We also need exploratory testing which is deep, calculative yet ad hoc, and attempts to create negative side effects that we can then automate. Neither strategy aspires to elusive, exhaustive testing – which, as JB Rainsberger argues, is the scam of integration testing.

General ideas

Under this layering, I find a (dotnet) solution structure of three test projects (.csproj) is easier to maintain and, importantly, the separation helps with the build server. Unit tests are built easily, run in place and fail fast (seconds not minutes). The integration tests require more work because you need the scripting to migrate up the database and seed data. These obviously take longer. I find the length of these tests and their stability are a good health indicator. If they don’t exist or are mixed up with unit tests, I worry a lot.

For example, I had a project where these were mixed up and the tests took 45 minutes to run, so tests were rarely run as a whole (so I found out). As it turned out, many of the tests were unstable. After splitting them, we got unit tests to under 30 seconds and then started the long process of reducing integration test time, which we got down to a couple of minutes on our development machines. Doing this required splitting out another group of tests that were mixed into the integration tests. These were in fact system tests that tested the whole of the system – that is, workflow tests that require all of the system to be in place: databases, data, web services, etc. These are the hardest to set up and maintain in a build environment, so we tend to do the least possible. For example, we might have Selenium tests to smoke test the application through the GUI, but it is usually the happy path, as fast as possible, that exercises that all our ducks are lined up.

Still some problems

Calling the middle layer integration testing is still problematic and can cause confusion. I introduced “integration” because it is about the longer-running tests which require the dependency of an integration point. When explaining this to people, they quickly picked up on the idea. Of course, they also bring their own understanding of integration testing, which is closer to the point that Rainsberger is making.

Some general points

  • a unit test should be test first
  • an integration test with a database is usually a test of the repository managing identity through the lifecycle of a domain object (see the sketch after this list)
  • a system test must cleanly deal with duplication (use other libraries)
  • a system test assertion is likely to be test-last
  • unit, integration and system tests should each have their own project
  • test project referencing should cascade down from system to integration through to the standalone unit
  • the number and complexity of tests across the projects should reflect the shape of the pyramid
  • the complexity of the test should be inversely proportional to the shape of the pyramid
  • unit and integration test object mothers are subtly different because of identity
  • a system test is best for testing legacy data because it requires all components in place
  • some classes/concerns might have tests split across test projects, e.g. a repository class should have both unit and integration tests
  • framework wiring code should be pushed down to unit tests, such as MVC routes, actions and redirects
  • the GUI can sometimes be unit tested, e.g. jQuery makes unit testing of browser widgets realistic
  • success of the pyramid is an increase in exploratory testing (not an increase in manual regressions)
  • the development team skill level and commitment is the greatest barrier closely followed by continuous integration
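
As promised above, a minimal sketch of the repository/identity style of integration test. PolicyRepository and Policy are illustrative names, assuming NUnit and a real (integration) database behind the repository:

    using NUnit.Framework;

    [TestFixture]
    public class PolicyRepositoryIntegrationTests
    {
        [Test]
        public void Saving_and_reloading_a_policy_preserves_identity()
        {
            var repository = new PolicyRepository("connection string to the integration database");
            var policy = new Policy { Number = "POL-001" };

            repository.Save(policy);                          // the database assigns the identity
            var reloaded = repository.GetByNumber("POL-001");

            Assert.AreEqual(policy.Id, reloaded.Id);          // same identity through the lifecycle
        }
    }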

Some smells

  • time taken to run tests is not inversely proportional to the shape of the pyramid
  • folder structure in the unit tests that in no way represents the layering of the application
  • only one test project in a solution
  • acceptance tests written without an abstraction layer/library

References

  1. The Forgotten Layer of the Test Automation Pyramid
  2. From backlog to concept
  3. Agile testing overview
  4. Standard implementation of test automation pyramid
  5. Freeman and Price, 2010, Growing Object-Oriented Software, Guided by Tests
  6. Stephens & Rosenberg, 2010, Design Driven Testing: Test Smarter, Not Harder

architects-and-agile

August 1st, 2010 No comments

This was just a nice little presentation from infoq on agilists and architects that I made some notes on.

Architects:

Enterprise, Data, Technical, Integration Architects: usually disconnected from the application (cf application architect)
– shared across business unit or development stream
– against reckless proliferation of tools
– often charged with stability

How do self-empowered teams work with enterprise standards and reviews?

Even good people have been messed up by bad ideas. What is in agile for architects?
– transparency and visibility into progress
– up to date specification of functionality (through tests and also self checking)
– results in adaptable code (reuse/harvest after the fact rather than planned)

How to make it work?
– Architects as customers
– Architects as team members … means they are customers and techies

Architects are central to understanding how the whole business holds together and can help with technical debt

Making estimates based on clean code versus on legacy code


a-quick-note-on-brooks

August 1st, 2010 No comments

Accidental and essential complexity in software

Brooks said that he believed the hard part of building software to be the specification, design and testing of the conceptual construct of software, not the labour of representing it, and testing the fidelity of that representation.

It is not surprising that in the shift from writing software to building software we have separated out design from building, building from testing and testing from design. This process we would call waterfall, and often we see people having a single-pass version of it in their mind’s eye.

Iterative and incremental development is a shift from building software to growing software. That is, first make the system run and then add to it bit by bit while always keeping it a working system. Brooks points out that he finds teams can grow much more complex entities in four months than they can build.


gerald-mezaros-concept-to-backlog

August 1st, 2010 No comments

Notes from Gerard Meszaros on Concept to Backlog

I was just re-reading Gerard Meszaros (of xUnit Test Patterns) via his PDF From Concept to Backlog
and also having a look at his presentation on infoq.

I was using him to think through an approach with a client. These are my notes from a while back.

The client

The current situation is:

– that the product has gone to business case
– there are a list of major features and put into themes
– overall costing/effort estimates
– business staffing/engagement with vendors (skills list)
– end delivery date

We already have:
– existing product (architecture, infrastructure)
– set of tests/test strategy based on the test automation pyramid (fitnesse, selenium, nunit)

There is no:
– product design (is there really one?)
– release plan
– user stories
– story tests/acceptance criteria

Risks:
– is time going to be an issue with new features?
– will the existing architecture still hold?
– what about the introduction of a new platform (e.g. Sitecore)?
– integration tests take 15 minutes to run (aka functional tests layer)

Notes from the talk

BDUF vs LRM (last responsible moment)
– making better decisions later

Product Envisioning
Product Planning
Product Execution

Elevator Statement (Moore – Crossing the Chasm)

For (target customers)
who are dissatisfied (with current alternatives)
our product is a (new product category)
that provides (key problem-solving capabilities)
unlike (the product alternative)
we have assembled (the key “whole product” features for our specific application)

Tensions:

- just in time analysis so that we don’t have inventory as waste

What have we done in our current project?

Main Features -
Need:
– product design (e.g. screens, data models or messaging protocols – not software design) – barely sufficient for something the customer loves
– product behaviour (features and definition of scope)

We have:
– some high level architecture (dotnet, test strategy)
– do we need more components?

What about a release plan?
– date & features

Do we want use cases as deliverables of the project? (yes – the current implementation has them)

Release Planning: (manage scope to fit budget)
– needing priorities around user stories (allows us some wiggle room)
– we have committed to the features, what about the stories? (that we don’t have)


subversion-conventions-notes

August 1st, 2010 No comments

Moving to Subversion notes:

I have included my copy of the Pragmatic Programmers (PP) Subversion book. Everything I would suggest is covered well enough in this book and I will refer to pages in it. I have also attached the free svn red book from O’Reilly.

My assumption in these notes is that you haven’t used SVN or aren’t familiar with its strategies. In perforce, you have been using a mainline strategy (as documented in the wiki). Subversion does not use this strategy. You will need to get familiar with its approach. PP p54 says:

“Developers should use branches to separate the main
line of development from code lines that have different
life cycles, such as release branches and major code
experiments. Tags are used to identify significant points
in time, including releases and bug fixes.”

Access to the repository:

For read-only browsing access in a browser this will allow you to look at all the projects.

  • https://domain/svn/

For real work, checking out and committing with an SVN client:

  • https://domain/svn/project/trunk

SVN Clients:

  • TortoiseSVN – GUI integration into Explorer – http://tortoisesvn.net/downloads
  • AnkhSVN – integration into Visual Studio – http://ankhsvn.open.collab.net/
  • VisualSVN – integration into Visual Studio – http://www.visualsvn.com/visualsvn/
  • SVN commandline – http://subversion.tigris.org/getting.html#windows

Personally, I use tortoise and commandline and then also visual studio integration.

Checking out:

Everyone has their own naming conventions. Personally, when dealing with a trunk, I often check out to a folder name with ‘-trunk’ in it. The name of the folder isn’t that important in the long run because it can be renamed.

 cd /path/to/src
 svn co https://domain/svn/project/trunk project-trunk

Committing:

svn commit -m "my check in" (PP p.91)

But note that you will already need to have added the file. A GUI client will help with this.

Adding might look like this on the command line: svn add . (PP p.63)

Organising the repository

  1. There are three folders for each project: trunk, tags, branches (PP p.108)
  2. We will NOT be running a multiple project repository because we will be using multiple repositories when needed (PP p.110)
  3. Repositories can be linked via svn:externals when and if needed (PP p.135)

Using tags and branches

This is all covered in Chapter 9 (PP p.111) including creating release branches, fixing bugs, etc.

Naming conventions

My suggestion is to use PP conventions (p. 114)

Thing to Name          Name Style          Examples
Release branch         RB-rel              RB-1.0
                                           RB-1.0.1a
Releases               REL-rel             REL-1.0
                                           REL-1.0.1a
Bug fix branches       BUG-track           BUG-3035
                                           BUG-10871
Pre-bug fix            PRE-track           PRE-3035
                                           PRE-10871
Post-bug fix           POST-track          POST-3035
                                           POST-10871
Developer experiments  TRY-initials-desc   TRY-MGM-cache-pages
                                           TRY-MR-neo-persistence

perforcemigration-notes

August 1st, 2010 No comments

Instructions:
http://www.perforce.com/perforce/doc.current/user/p4perlnotess.txt

But, actually:

% wget ftp://ftp.perforce.com/perforce/r09.1/bin.tools/p4perl.tgz (extract to /path/to/tmp/p4perl)

extract to location /path/to/tmp/

Get:

wget ftp://ftp.perforce.com/perforce/r09.1/bin.darwin80u/p4api.tgz (extract to /path/to/tmp/p4api)

wget ftp://ftp.perforce.com/perforce/r09.1/bin.darwin80u/p4

extract to location
/path/to/tmp/

p4 todd$ ls
p4 p42svn.pl p4api p4api.tgz p4perl p4perl.tgz

cd p4perl

perl Makefile.PL --apidir ../p4api (from p4api.tgz)
make

make tests
sudo make install

Password:

% perl -MP4 -e "print P4::Identify()"

Perforce – The Fast Software Configuration Management System.
Copyright 1995-2009 Perforce Software. All rights reserved.
Rev. P4PERL/DARWIN9X86/2009.1.GA/205670 (2009.1 API) (2009/06/29).

Right now I am ready to do the migration: (I used version 91)

wget http://p42svn.tigris.org/source/browse/*checkout*/p42svn/trunk/p42svn.pl?revision=91

% perl p42svn.pl usage

If you get errors then you have installation problems with perl and will need to spend time sorting them out. (I had major problems that took me a couple of hours to sort out including updating, upgrading, installing, arrrggghhh)

perl p42svn.pl --user yours --password secret --port xx.xx.xx.xx:1666 --branch "//depot/project"=trunk > svn.dump

Now, move that file to the windows box

Back on the windows box:

svnadmin create c:\Repositories\project
svnadmin load c:\Repositories\project < svn.dump


jquery-bdd

August 1st, 2010 No comments

Notes from a jQuery session

Structure of session:

  • Jump into an example
  • Build some new functionality
  • Come back and see the major concepts
  • Look at what is actually needed to treat javascript as a first-class citizen
  • Different testing libraries

Quickly, why a library in javascript?

  • cross-browser abstraction (dynduo circa 1998 was still hard!)
  • jQuery, MooTools, Extjs, Prototype, GWT, YUI, Dojo, …
  • I want to work with DOM with some UI abstractions – with a little general purpose
  • simple HTML traversing, event handling
  • also functional, inline style
  • I want plugin type architecture

Story plugin demo

  • Viewer of StoryQ results
  • StoryQ produces XML, this widget gives the XML a pretty viewer

A screenshot and demo

What would your acceptance criteria be? What do you think some of the behaviours of this page are?

Acceptance:

eg should display the PROJECT at the top with the number of tests

Behaviours:

think in terms of themes: data, display, events

The tests … what does the application do?

  • run the tests and see the categories
  • data loading: xml
  • display: traversing xml and creating html
  • events: click handlers

Let’s build some new functionality!

Goal: add “Expand All | Contract All | Toggle” functionality to the page

Acceptance:

  • The user should be able to expand, collapse or toggle the tree

Specs

Display:
* should show “Expand All | Contract All | Toggle”
Events:
* should show all results when clicking expand all
* should show only the top class when clicking contract all
* should toggle between all and one when clicking on toggle

Coding: Add acceptance

Add Display specs

Add Event specs

Return back to completing the Acceptance

Major aspects we covered

HTML traversing

  • I want to program akin to how I look at the page
  • I may look for: an element, a style, some content or a relationship
  • then perform an action
$('div > p:first')
$('#mylist').append("<li>another item</li>")

    Event handling

    • I want to look at page and add event at that point
    • I want to load data (ie xml or json)
    $('div').click(function(){
      alert("div clicked")
    })

    $('div').bind('drag', function(){
      $(this).addClass('dragged')
    })

    $('#results').load('result.html')

    $.get('result.xml', function(xml){
      $("user", xml).each(function(){
        $("<li>").text($(this).text()).appendTo("#mylist")
      })
    })

    Functional style

    • almost everything in jQuery is a jQuery object
    • that returns an object
    • every method can be called on a jQuery object
    • that means you can chain
    • plus I want it to be short code

    $('<div/>')
      .addClass('indent')
      .addClass((idx == 4) ? 'scenario' : '')
      .text($(this).attr('Prefix') + ' ' + $(this).attr('Text'))
      .append($('<span/>').text("a child piece of text")
        .click(function(){ $(this).addClass('click') }))
      .appendTo(results)

    Plugin architecture

    • drop in a widget (including my own)
    • then combine, extend
    • help understand customisation
    • basically just work

    $('#tree').treeview();

    $.ajax({
      url: '/update',
      data: name,
      type: 'put',
      success: function(xml){
        $('#flash').text("successful update").addClass('success')
      }
    })

    With power and simplicity … comes responsibility

    • the need to follow conventions
      – plugins return an array
      – plugins accept parameters but have clear defaults
      – respect namespace
    • the need for structure
      – test data
      – min & pack
      – releases
    • the need to avoid mundane, time consuming tasks
      – downloading jquery latest
      – download jQuery UI
      – building and releasing packages
    • needs tests
      – I use jsspec

    Sounds like real software development?

    Treat javascript as a first-class citizen

    Give your plugin a directory structure:

    /src
      /css
      /images
      jquery.plugin.js
    /test
      spec_plugin.js
      acceptance_plugin.js
      specs.html
      acceptance.html
    /lib
      /jsspec
      jquery.min.js
      jquery-ui.js
      /themes
        /base
    example.html
    Rakefile
    History.txt
    README.txt

    Generate your plugin boilerplate code

    jQuery Plugin Generator
    * gem install jquery-plugin-generator

    (function($) {
      $.jquery.test = {
        VERSION: "0.0.1",
        defaults: {
          key: 'value'
        }
      };

      $.fn.extend({
        'jquery.test': function(settings) {
          settings = $.extend({}, $.jquery.test.defaults, settings);
          return this.each(function(){
            self = this;
            // your plugin

          })
        }
      })
    })(jQuery);

    Use a build tool to do the … ah … building

    rake acceptance # Run acceptance test in browser
    rake bundles:tm # Install TextMate bundles from SVN for jQuery and…
    rake clean # Remove any temporary products.
    rake clobber # Remove any generated file.
    rake clobber_compile # Remove compile products
    rake clobber_package # Remove package products
    rake compile # Build all the packages
    rake example # Show example
    rake first_time # First time run to demonstrate that pages are wor…
    rake jquery:add # Add latest jquery core, ui and themes to lib
    rake jquery:add_core # Add latest jQuery to library
    rake jquery:add_themes # Add all themes to libary
    rake jquery:add_ui # Add latest jQueryUI (without theme) to library
    rake jquery:add_version # Add specific version of jQuery library: see with…
    rake jquery:packages # List all packages for core and ui
    rake jquery:packages_core # List versions of released packages
    rake jquery:packages_ui # List versions of released packages
    rake jquery:versions # List all versions for core and ui
    rake jquery:versions_core # List jQuery packages available
    rake jquery:versions_ui # List jQuery UI packages available
    rake merge # Merge js files into one
    rake pack # Compress js files to min
    rake package # Build all the packages
    rake recompile # Force a rebuild of the package files
    rake repackage # Force a rebuild of the package files
    rake show # Show all browser examples and tests
    rake specs # Run spec tests in browser

    Testing … acceptance and specs

    • specs: BDD style – DOM traversing and events
    • acceptance: tasks on the GUI with the packaged version (minified or packed)
    • these both serve as good documentation as well
    • plus, you have demo page baked in!

    Compiling and packaging

    • Compiling javascript … hhh? … yes, if you minify or pack your code
    • gzip compression with caching and header control is probably easier though
    • packed code is HARD to debug if it doesn’t work

    Now you are ready for some development

    Different ways to test Javascript

    Understanding JavaScript Testing

    • testing for cross-browser issues – so this is useful if you are building javascript frameworks
    • Unit – QUnit, JsUnit, FireUnit,
    • Behaviour – Screw.Unit, JSSpec, YUITest,
    • Functional with browser launching – Selenium (IDE & RC & HQ), Watir/n, JSTestDriver, WebDriver
    • Server-side: Crosscheck, env.js, blueridge
    • Distributed: Selenium Grid, TestSwarm

    Reference

    jQuery CheatSheet

    • slide 48: great explanation of DOM manipulation – append, prepend, after, before, wrap, replace …