
Test automation pyramid and web services

August 24th, 2010

Using web services is easy. That is, if we listen to vendors: point to a WSDL and, hey presto, easy data integration. It is rarely that easy for professional programmers. We have to get through to endpoints via proxies, use credentials and construct the payloads correctly, and all of this across different environments. Add to this the dominance of point-and-click WSDL integration, and many developers I talk to don't really work at the code level for these types of integration points, or if they do it is only to the extent of passively generated code. So to suggest TDD on web services is, at best, perplexing. Here I am trying to explain how the test automation pyramid helps us TDD web services.

Test-first web services?

Can we do test-first development on web services? Or is test-first web services an oxymoron? (ok, assume each is only one word ;-) ) My simple answer is no. But the more complex answer is that you have to accept some conditions to make it worthwhile. These conditions are, to me, important concepts that lead us to work test-first rather than the other way around.

One condition is that my own domain concepts remain clean. Keeping it clean means keeping the web service's domain out of my domain. For example, if you look at the samples demonstrating how to use a web service, the service proxy and its domain sit right there in the application – in a web application you'll see the references in the code-behind. This worries me. When the web service changes and the code is regenerated, that new code is then likely to be littered throughout the application. Another, related condition is that integration points should come through my infrastructure layer because that aids testability. If I can get at the service reference directly, I am in most cases going to end up with an ad hoc error handling, logging and domain mapping strategy.
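To make this concrete, here is a minimal sketch (the customer service and all of its names are hypothetical, not from any real project) of what that looks like: the rest of the application sees only an interface in my core layer that speaks my own domain model, never the generated proxy.

  // anti-corruption seam: application code depends on this, not on the generated proxy
  namespace Core.ServiceReference
  {
      public interface ICustomerServiceClient
      {
          // returns my domain model, not the generated service type
          Core.Domain.Customer GetCustomer(string customerId);
      }
  }

  namespace Core.Domain
  {
      public class Customer
      {
          public string Id { get; set; }
          public string Name { get; set; }
      }
  }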

Another condition is that there is an impedance mismatch between our domain and the service's domain. We should deal with this mismatch as early as possible and as regularly as possible. We should also deal with these issues test-first and in isolation from the rest of the code. In practice, this is a mapper concern and we have a vast array of options (eg the AutoMapper library, LINQ). Which option fits is likely to depend on the complexity of the mapping. For example, if we use WSE3 bindings then we will be mapping from an XML structure into an object. Here we'll most likely do the heavy lifting with an XML parser such as System.Xml.Linq. Alternatively, if we are using the ServiceModel bindings then we will be mapping object to object. If the two models follow similar conventions we might get away with AutoMapper, and if not we are likely to roll our own. If you are rolling your own, I would suggest that the AutoMapper interface is a nice one to follow (eg Mapper.Map<T, T1>(data)).
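As an illustration, here is a hand-rolled object-to-object mapper sketch that follows that AutoMapper-style convention; CustomerDto stands in for a generated proxy type and Customer for my own domain model, both hypothetical names carried over from the sketch above.

  public class CustomerMapper
  {
      public Core.Domain.Customer Map(ServiceProxy.CustomerDto data)
      {
          if (data == null) return null;

          return new Core.Domain.Customer
          {
              Id = data.CustomerId,                                  // rename across domains
              Name = (data.FirstName + " " + data.LastName).Trim()   // shape change
          };
      }
  }

In the WSE3 case the shape of the class stays the same; the body just becomes System.Xml.Linq parsing of the response rather than property-to-property assignment.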

I think there is a simple rule of thumb with mapping domains: you can deal with the complexity now or later, but regardless you'll have to deal with the intricacies at some point. Test-first demands that you deal with them one at a time, and now. Alternatively, you delay and have to deal with them at integration time, and that often means someone else is finding them – in the worst case, in production! I am always surprised at how many "rules" there are when mapping domains and also at how much we can actually do before we try to integrate. I was just working with a developer who has done a lot of this type of integration work but never test-first. As we proceeded test-first, she started reflecting on how much of the mapping work she usually did under test conditions could be moved forward into development. On finishing that piece of work we were also surprised at how many tests were required to do the mapping – a rough count was 150 individual tests across 20 test classes. This was for mapping two similar domains, each with 5 domain models.

What code do you really need to write?

So let's say that you accept that you don't want a direct reference to the client proxy. What else is needed? Of course, the answer is: it depends. It depends on:

  • client proxy generated (eg WSE3 vs ServiceModel): when using the client proxy, WSE3 will require a little more inline work around, say, the Proxy and SetClientCredential methods, whereas ServiceModel can have it inline or delegate it to the configuration file
  • configuration model (eg xml (app.config) vs fluent configuration (programmatic)): you may want to deal with configuration through a fluent configuration regardless of an app.config. This is useful for configuration checking and logging within environments. Personally, the more self-checking you have for configuration settings, the easier the code will be to deploy through the environments (see the first sketch after this list). Leaving configuration checking and setting solely to operations people is a good example of throwing code over the fence: configuration becomes someone else's problem.
  • reference data passed with each request: most systems require some form of reference data that must accompany each and every request. I prefer to avoid handling this at the per-request level, dealing with it instead when the service is instantiated. In this respect it is not unlike the credential information.
  • security headers: you may need to add security headers to your payload. I forget which WS-* standard this relates to, but like proxies and credentials it is a strategy that needs to be catered for. WSE3 and ServiceModel each have their own mechanisms for doing this.
  • complexity of domain mappings: you will need to call the mapping concern to do this work, but it should only be a one-liner because you have designed and tested the mapping somewhere else. It is worth noting the extent of the differences, though. With simple pass-through calls, some mappings are almost a single value – a calculation service, for example, may return just a simple value. With domain synching, however, the domain mapping is a somewhat complex set of rules needed to get the service to accept your data.
  • error handling strategy: we are likely to want to catch service exceptions and throw our own kind so that we can capture them further up in the application (eg at the UI layer). With lambdas it is straightforward and clean to try/catch the method calls to the service client (see the second sketch after this list).
  • logging strategy, particularly for debugging: you are going to need to debug payloads at some stage. Personally, I hate stepping through with the debugger, and that doesn't help outside of development environments anyway. So a good set of logging is needed too. I'm still surprised how often code doesn't have this.
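As a sketch of that self-checking configuration idea (the helper and its key names are mine, not a standard API), the client can demand its settings up front and fail fast with a meaningful message:

  using System.Configuration;

  public static class ServiceConfiguration
  {
      // fail fast if a required setting is missing rather than finding out after deployment
      public static string Require(string key)
      {
          var value = ConfigurationManager.AppSettings[key];
          if (string.IsNullOrEmpty(value))
              throw new ConfigurationErrorsException(
                  "Missing required setting '" + key + "' for the service client");
          return value;
      }
  }

  // usage: var endpoint = ServiceConfiguration.Require("CustomerService.Url");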
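And here is a sketch of the lambda-based error handling and logging strategy, reusing the hypothetical names from the earlier sketches (ICustomerService stands in for the generated contract, ILog for whatever logging abstraction you use, CustomerServiceException for your own exception type):

  using System;
  using Core.Domain;
  using Core.ServiceReference;

  public class CustomerServiceClient : ICustomerServiceClient
  {
      private readonly ICustomerService _proxy;   // generated contract (assumed name)
      private readonly ILog _log;                 // your logging abstraction
      private readonly CustomerMapper _mapper = new CustomerMapper();

      public CustomerServiceClient(ICustomerService proxy, ILog log)
      {
          _proxy = proxy;
          _log = log;
      }

      public Customer GetCustomer(string customerId)
      {
          // the lambda keeps try/catch, logging and rethrowing in one place
          return Call(() => _mapper.Map(_proxy.GetCustomer(customerId)));
      }

      private T Call<T>(Func<T> serviceCall)
      {
          try
          {
              return serviceCall();
          }
          catch (Exception ex)
          {
              _log.Debug("Service call failed", ex);
              throw new CustomerServiceException("Call to the customer service failed", ex);
          }
      }
  }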

Test automation pyramid

Now that we know what code we need to write, what types of tests are we likely to need? If you are unfamiliar with the test automation pyramid or my specific usage see test automation pyramid review for more details.

System Tests

  • combine methods for workflow
  • timing acceptance tests

Integration Tests

  • different scenarios on each method
  • exception handling (eg bad credentials)

Unit Tests

  • Each method with mock (also see mocking webservices; a sketch follows below)
  • exception handling on each method
  • mapper tests
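As a sketch of one such unit test (NUnit assumed; FakeCustomerService and NullLog are hypothetical hand-rolled fixtures that implement the generated contract and the logging abstraction from the earlier sketches):

  using NUnit.Framework;

  [TestFixture]
  public class CustomerServiceClientTest
  {
      [Test]
      public void GetCustomer_MapsServiceResponseIntoDomainModel()
      {
          // the fake returns a canned DTO instead of going across the wire
          var fakeProxy = new FakeCustomerService
          {
              CannedResponse = new CustomerDto { CustomerId = "42", FirstName = "Ada", LastName = "Lovelace" }
          };
          var client = new CustomerServiceClient(fakeProxy, new NullLog());

          var customer = client.GetCustomer("42");

          Assert.That(customer.Id, Is.EqualTo("42"));
          Assert.That(customer.Name, Is.EqualTo("Ada Lovelace"));
      }
  }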

More classes!

Without going into implementation details, all of this means that there is a boilerplate of likely classes. Here's what I might expect to see in one of my solutions. A note on conventions: a trailing slash '/' denotes a folder rather than a file; <Service>, <Model> and <Method> are specific to your project and indicate that there is likely to be one or more of that type; names ending in .xml are XML files and all others, if not folders, are .cs files.

  Core/
    ServiceReference/
      I<Service>Client
      <Service>Exception
    Domain/
      <Model>
      ServiceReference/
        Credential<Model>
  
  Infrastructure/
    Service References/                  <-- auto generated from Visual Studio ServiceModel
    Web Reference/                       <-- auto generated from Visual Studio WSE3
    ServiceReference/
      <Service>Client
      Mapper/
        <Model>Mapper
  
  Tests.System/
    Acceptance/
      ServiceReference/
        <Service>/
          WorkflowTest
          TimingTest
    Manual/
      ServiceReference/
         <Service>-soap-ui.xml
  
  Tests.Integration/
    ServiceReference/
      <Service>/
        <Method>Test
  
  Tests.Unit/
    ServiceReference/
      <Service>/
        ConstructorTests
        <Method>Test
        ExceptionsTest
        Mappers/
          <Model>Test
        Security/
          CredentialTest                  <-- needed if header credential
    Fixtures/
      ServiceReference/
        <Service>/
          Mock/Fake<Service>Client        
          Credentials.xml
          Request.xml                     <-- needed if WSE3
          Response.xml                    <-- needed if WSE3
      Domain/
        <Model>ObjectMother
        ServiceReference/
          Credential<Model>ObjectMother   <-- needed if header credential

That’s a whole lot more code!

Yup, it is. But each concern and test is now separated out and you can work through them independently and systematically. Here's my point: you can deal with these issues now and have a good test bed, so that when changes come through you have change-tolerant code and know you've been as thorough as you can be with what you presently know. Or you can deal with it later, at integration time, when you can ill afford to be the bottleneck in the highly visible part of the process.

Potential coding order

Now that we have a boilerplate of options, I tend to want to suggest an order. With the test automation pyramid, I suggest sketching the design and the domain model/services first. Then write the system test stubs, then come back through the unit and integration tests before completing/implementing the system test stubs. Here's my rough ordered list:

  1. have a data mapping document – an Excel, Word or other form of table is excellent and often provided by BAs – you still have to have some analysis of the differences between your domain and theirs
  2. generate your Service Reference or Web Reference client proxy code – I want to see what the models and endpoints look like – I may play with them via soapUI – but I usually leave that for later, if it is needed at all
  3. write my system acceptance test stubs – here I need to understand how these services fit into the application and what the end workflow is going to be. For example, I might write these as user story given/when/then scenarios (see the first sketch after this list). I do not try to get these implemented beyond compiling because I will come back to them at the end. I just need a touch point for the big picture.
  4. start writing unit tests for my Service Client – effectively, I am doing test-first creation of my I<Service>Client, making sure that I can use each method with an injected Mock/Fake<Service>Client
  5. unit test out my mappers – by now, I will be thinking about the request/response cycle and will need to create ObjectMothers (see the second sketch after this list) to be translated into the service reference domain model for posting. I might be working in the other direction too – which is usually a straightforward mapping, but it gets clearer once you start the integration tests.
  6. integration test each method – once I have a good set of mappers, I'll often head out to the integration point and check how broken my assumptions about the data mapping are. Usually, as assumptions break down, I head back into the unit tests to improve the mappers so that the integration tests pass – this is where most of the work occurs!
  7. at this point, I need good DEBUG logging, and I'll ensure that I am not using the step-through debugger but rather good log files at DEBUG level
  8. write system timing tests, because sometimes there is a timing issue that the customer needs to be aware of
  9. implement those system tests that are covered by the unit/integration tests for the methods built thus far
  10. add exception handling unit tests and code
  11. add credential headers (if needed)
  12. back to the system tests to finish off and implement the original user stories
  13. finally, sometimes we need to create a set of record-and-replay tests for other people's testing. soapUI is good for this and we can easily save these tests in source control for later use.
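For item 3, a system acceptance test stub might look like this sketch (NUnit assumed; the scenario names are illustrative) – it compiles but is deliberately left unimplemented until the end:

  using NUnit.Framework;

  [TestFixture]
  public class CustomerWorkflowTest
  {
      [Test, Ignore("stub: implement once the unit and integration tests are in place")]
      public void GivenAnExistingCustomer_WhenAnOrderIsSubmitted_ThenTheCustomerIsSynchedToTheService()
      {
          // intentionally empty – the given/when/then name captures the user story
      }
  }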
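For item 5, an ObjectMother is just a set of well-named factory methods for known-good (and deliberately broken) domain objects; a sketch using the hypothetical Customer model from the earlier sketches:

  public static class CustomerObjectMother
  {
      public static Core.Domain.Customer Valid()
      {
          return new Core.Domain.Customer { Id = "42", Name = "Ada Lovelace" };
      }

      public static Core.Domain.Customer MissingName()
      {
          return new Core.Domain.Customer { Id = "42", Name = null };
      }
  }

The mapper tests then take these objects and assert on the service reference model that comes out the other side.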

Some problems

Apart from not having presented a full implementation, here are some problems I see:

  • duplication: your I<Service>Client and the generated proxy are probably very similar, the difference being that yours returns your own domain model objects. I can't see how to get around this, given that your I<Service>Client is an anti-corruption class.
  • namespacing/folders: I have suggested ServiceReference/<Service>/ as the folder structure. This is a multi-service structure, so you could ditch the <Service> folder if you only had one service.
  • Fixtures.ServiceReference.<Service>.Mock/Fake<Service>Client: this implementation is up to you. If you are using ServiceModel then you have an interface to implement against. If you are using WSE3 you don't have an interface – try extending the generated proxy through partial classes or wrapping it with another class (see the sketch below).
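A sketch of the wrapping option, assuming a hypothetical WSE3-generated proxy called CustomerServiceWse with its own request type:

  // the generated WSE3 proxy has no interface, so hide it behind my own
  public class WseCustomerServiceClient : ICustomerServiceClient
  {
      private readonly CustomerServiceWse _proxy = new CustomerServiceWse();
      private readonly CustomerMapper _mapper = new CustomerMapper();

      public Core.Domain.Customer GetCustomer(string customerId)
      {
          // the proxy returns its own generated type; the mapper translates it
          var dto = _proxy.GetCustomer(new GetCustomerRequest { CustomerId = customerId });
          return _mapper.Map(dto);
      }
  }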