Test Automation Pyramid in ASP.NET MVC

This is a reposting of my comments from Mike Cohn’s Test Automation Pyramid post.

I often use Mike’s Test Automation Pyramid to explain to clients’ testers and developers how to structure a test strategy. It has proved the most effective rubric (compared, say, with Brian Marick’s Quadrants model as further evolved by Crispin and Gregory) for getting people thinking about what is going on in testing the actual application and its stress points. I want to add that JB Rainsberger’s talk mentioned above is crucial to understanding why that top-level set of tests can’t prove the integrity of the product by itself.

It has got me thinking that perhaps we need to rethink some of the assumptions behind these labels. Partly because my code isn’t quite the same as, say, described here, I am suggesting something slightly different from, say, this approach. The difference of opinion in these blogs also suggests this. So I thought I would spend some time talking about how I use the pyramid and then come back to rethinking its underlying assumptions.

I have renamed some parts of the pyramid so that at first glance it is easily recognisable by clients. This particular renaming is in the context of writing MVC web applications. I get teams to draw what their pyramid looks like for their project – or what they might want it to be, because it is often upside down.

My layers:

  • System (smoke, acceptance)
  • Integration
  • Unit

I also add a cloud on top (I think from Crispin and Gregory) for exploratory testing. This is important for two reasons: (1) I want automated testing so that I can allow more time for manual testing, and (2) I want to emphasise that there should be no manual regression tests. This supports Rainsberger’s argument not to use the top-level testing as proof of the system’s integrity – to me the proof is in the use of the system. Put alternatively, automated tests are neither automating your testers’ testing nor are they a silver bullet. So if I don’t have a cloud, people forget that manual testing is part of the overall test strategy (plus, with a cloud, when the pyramid is inverted it makes a good picture of ice cream in a cone, and you can have the image of a person licking the ice cream and it falling off ;-) .)

In the context of an MVC application, this type of pyramid has led me to some interesting findings at the code-base level. Like everyone is saying, we want to drive testing down towards the unit tests because they are foundational, discrete and cheapest. To do this, I need to create units that can be tested without boundary crossing. For an ASP.NET MVC application (just like Rails), this means that I can unit test (with the aid of isolation frameworks):

  • models and validations (particularly using ObjectMother)
  • routes coming in
  • controller rendering of actions/views
  • controller redirection to actions/views
  • validation handling (from errors from models/repositories)
  • all my jQuery plugin code for UI-based rendering
  • any HTML generation from HtmlHelpers (although I find this of little value and brittle)
  • and, of course, all my business “services”

I am always surprised at how many dependencies I can break throughout my application to make unit tests – in all of these cases I do not need my application to be running in a web server (IIS or Cassini). They are quick to write, quick to fail. They do, however, require additional code to be written or libraries to be provided (eg MvcContrib Test Helpers).
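As a minimal sketch of the controller rendering and redirection cases above – the ProductController and its action are hypothetical, not from any project mentioned in this post – an action can be exercised as a plain object under NUnit, asserting on the returned ActionResult, with no IIS or Cassini involved:

```csharp
using System.Web.Mvc;
using NUnit.Framework;

// Hypothetical controller, for illustration only.
public class ProductController : Controller
{
    public ActionResult Details(int id)
    {
        if (id <= 0)
            return RedirectToAction("Index");          // invalid id: redirect away
        return View("Details", new { Id = id });       // otherwise render the view
    }
}

[TestFixture]
public class ProductControllerTests
{
    [Test]
    public void Details_WithValidId_RendersDetailsView()
    {
        var controller = new ProductController();

        var result = controller.Details(42) as ViewResult;

        Assert.IsNotNull(result);
        Assert.AreEqual("Details", result.ViewName);    // controller rendering of the view
    }

    [Test]
    public void Details_WithInvalidId_RedirectsToIndex()
    {
        var controller = new ProductController();

        var result = controller.Details(0) as RedirectToRouteResult;

        Assert.IsNotNull(result);
        Assert.AreEqual("Index", result.RouteValues["action"]);  // controller redirection
    }
}
```

The controller here has no dependencies to isolate; once it takes repositories or services, an isolation framework (or the MvcContrib Test Helpers) fills those in and the shape of the test stays the same.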

For integration tests, I now find that the only piece of the application that still requires a dependency is the connection to the database. Put more technically, I need to check that my repository pattern correctly manages my objects’ lifecycles and their identity; it also ensures that I correctly code the impedance mismatch between the object layer of my domain and the relational layer of the database. In practice, this is checking a whole load of housekeeping rather than business logic: eg that my migration scripts are in place (eg schema changes, stored procs), that my mapping code (eg ORM) is correct, and that the code links all this up properly. Interestingly, I now find that this layer, in terms of lines of code, is smaller than the pyramid suggests, because there is a lot of code in a repository service that can be unit tested – it is really only the code that checks identity that requires a real database. The integration tests left tend then to map linearly to the CRUD functions. I follow the rule: one test per dependency. If my integration tests get more complicated, it is often time to go looking for domain smells – in the domain-driven design sense, I haven’t got the bounded context right for the current state/size of the application.
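A hedged sketch of one such test – the IProductRepository, Product and connection string below are hypothetical, and the test database is assumed to already have the migrations applied. The only boundary crossed is the database, and the only thing being proved is that identity and mapping survive the round trip:

```csharp
using NUnit.Framework;

// One test per dependency: the single dependency here is the database connection.
[TestFixture]
public class ProductRepositoryIntegrationTests
{
    // Hypothetical repository and connection string, for illustration only.
    private readonly IProductRepository _repository =
        new ProductRepository("Server=.;Database=ShopTest;Trusted_Connection=True;");

    [Test]
    public void Save_ThenGetById_PreservesIdentityAndMapping()
    {
        var product = new Product { Name = "Widget", Price = 9.95m };

        _repository.Save(product);                      // INSERT: the ORM assigns the identity
        var reloaded = _repository.GetById(product.Id); // SELECT: back across the boundary

        Assert.AreEqual(product.Id, reloaded.Id);       // identity survives the round trip
        Assert.AreEqual(product.Name, reloaded.Name);   // mapping code round-trips the data
    }
}
```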

For the top layer, like others, I see it as the end-to-end tests, covering any number of dependencies needed to satisfy a test across scenarios.

I have also found that there are actually different types of tests inside this layer. Because it is a web application, there is the smoke test – some critical-path routes that show that all the ducks are lined up – Selenium, Watir/N and even Steve Sanderson’s MvcIntegrationTest are all fine. I might use these tests to target parts of the application that are known to be problematic so that I get as early a warning as possible.
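A sketch of what a critical-path smoke test can look like with the Selenium WebDriver .NET bindings – the URL, page markup and expected text here are invented for illustration, and the application is assumed to be deployed and running:

```csharp
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

[TestFixture]
public class SmokeTests
{
    [Test]
    public void HomePage_ShowsCampaignList_ProvingTheDucksLineUp()
    {
        // Top of the pyramid: everything must be wired up and running for this to pass.
        using (IWebDriver driver = new FirefoxDriver())
        {
            driver.Navigate().GoToUrl("http://localhost:8080/");   // hypothetical deployment URL

            // One critical path: the landing page renders and the main heading is present.
            var heading = driver.FindElement(By.CssSelector("h1"));
            StringAssert.Contains("Campaigns", heading.Text);       // hypothetical page content
        }
    }
}
```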

Then there are the acceptance tests. This is where I find the most value, not only because it links customer abstractions of workflow with code but also, as importantly, because it makes me attend to code design. I find that to run maintainable acceptance tests you need to create yet another abstraction. Rarely can you just hook up the SUT API and have it work. You need setup/teardown data and various helper methods. To do this, I explicitly create “profiles” in code for the setup of data and the exercising of the system. For example, when I wrote a banner-delivery tool for a client (think OpenX or Google Ads) I needed to create a “Configurator” and an “Actionator” profile. The Configurator was able to create a number of banner ads in the system (eg an HTML banner on this site, a text banner on that site) and the Actionator then invoked 10,000 users on this page on that site. In both cases, I wrote C# code to do the job (think an internal DSL as a fluent interface) rather than, say, writing it in FitNesse.
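The Configurator and Actionator names are from that project, but the types and fluent methods below are a hypothetical sketch of what such an internal DSL can look like, not the client’s actual code:

```csharp
using System.Collections.Generic;

// Hypothetical domain types, for illustration only.
public enum BannerType { Html, Text }
public class Banner { public string Site; public BannerType Type; }
public interface IBannerService
{
    void Register(Banner banner);
    void RecordImpression(string site, string page);
}

// "Configurator" profile: sets up data in the system under test.
public class Configurator
{
    private readonly List<Banner> _banners = new List<Banner>();

    public Configurator WithHtmlBanner(string site)
    {
        _banners.Add(new Banner { Site = site, Type = BannerType.Html });
        return this;                        // fluent: each step returns the profile
    }

    public Configurator WithTextBanner(string site)
    {
        _banners.Add(new Banner { Site = site, Type = BannerType.Text });
        return this;
    }

    public void Apply(IBannerService service)
    {
        foreach (var banner in _banners)
            service.Register(banner);       // push the setup data into the SUT
    }
}

// "Actionator" profile: exercises the system the way users would.
public class Actionator
{
    private readonly IBannerService _service;
    public Actionator(IBannerService service) { _service = service; }

    public void VisitPage(string site, string page, int users)
    {
        for (var i = 0; i < users; i++)
            _service.RecordImpression(site, page);
    }
}
```

An acceptance test then reads close to the customer’s language: configure the banners with the Configurator, invoke the users with the Actionator, and assert on the delivery counts.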

Why are these distinctions important? A few reasons. The first is that the acceptance tests in this form are a test of the design of the code rather than the function. I always have to rewrite parts of my code so that the acceptance tests can hook in. It has only ever improved my design, such as the separation of concerns, and it has often given me greater insight into my domain model and its bounded contexts. For me, these acceptance tests are yet another conversation with my code – but by the time I have had the unit, integration and acceptance test conversations about the problem, the consensus decision isn’t a bad point to be at.

Second is that I can easily leverage my DSL for performance testing. This is going to help me with the non-functional testing (the fourth quadrant of the Test Quadrants model).
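For instance, the same hypothetical Actionator profile from the sketch above can be pointed at the system and timed, which is all a first rough load figure needs:

```csharp
using System;
using System.Diagnostics;

public static class LoadCheck
{
    // Reuses the hypothetical Actionator profile sketched earlier.
    public static void Run(IBannerService service)
    {
        var stopwatch = Stopwatch.StartNew();
        new Actionator(service).VisitPage("siteB.example.com", "/home", 10000);
        stopwatch.Stop();
        Console.WriteLine("10,000 impressions took {0} ms", stopwatch.ElapsedMilliseconds);
    }
}
```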

Third is that this is precisely the setup you need for a client demo. So at any point, I can crank up the demo data for the demo or exploratory testing. I think it is at this point that we have a closed loop: desired function specified, code to run, and data to run against.

Hopefully, that all makes some sense. Now back to thinking about the underlying assumptions of what is going on at each layer. I think we are still not clear on what we are really testing at each layer of the pyramid: most interpretations tend to be organised around the physical layers, the logical layers or the roles within the team. For example, some map it to MVC, particularly because the V maps closely to the UI. Others stay with the traditional unit, functional and integration labels, partly because of the separation of roles within a team.

I want to suggest that complexity is a better underlying organisation. I am happy to leave the nomenclature alone: the bottom is where there are no dependencies (unit), the second has one dependency (integration) and the top has as many as you need to make it work (system). It seems to me that the bottom two layers require you to have a very clear understanding of your physical and logical architecture, expressed in terms of boxes and directed lines, and to ensure that you test each line for every boundary.

If you look back to my unit tests, they identified logical parts of the application and tested at boundaries. Here’s one you might not expect. The UI is often seen as a low-value place to test. Yet frameworks like jQuery suggest otherwise and break down our layering: I can unit test a lot of the browser code which is traditionally seen as the UI layer. I can widgetize any significant interactions or isolate any specific logic and unit test this outside the context of the application running (StoryQ has done this).

The integration tests tested across a logical and often physical boundary. They really have only one dependency. Because there is one dependency, the nature of complexity here is still linear. One dependency equals no interaction with other contexts.

The top level is all about putting it together so that people across different roles can play with the application and use complex heuristics to check its workings. But I don’t think the top level is really about the user interface per se. It only looks that way because the GUI is the most generalised abstraction through which customers and testers believe they understand the workings of the software. Working software and the GUI should not be conflated. Complexity at the top-most level is that of many dependencies interacting with each other – context is everything. Complexity here is not linear. We need automated system testing to follow critical paths that create combinations or interactions that we can prove do not have negative side effects. We also need exploratory testing which is deep, calculative yet ad hoc, and which attempts to create negative side effects that we can then automate. Neither strategy aspires to elusive, exhaustive testing – which, as JB Rainsberger argues, is the scam of integration testing.

There’s a drawback when you interpret the pyramid along these lines. Test automation requires a high level of understanding of your solution architecture, its boundaries and interfaces, the impedance mismatches in the movement between them, and a variety of toolsets required to solve each of these problems. And, I find, it requires a team with a code focus. Many teams and managers I work with find the hump of learning, and its associated costs, too high. I like the pyramid because I can slowly introduce more subtle understandings of it as the team gets more experience.

PostScript

I have just been trawling through C# DDD-type books written for Microsoft-focussed developers, looking for the test automation pyramid. There is not one reference to this type of strategy. At best, one book, .NET Domain-Driven Design with C#: Problem – Design – Solution, touches on unit testing. Others mention that good design helps testability right at the end of the book (eg Wrox Professional ASP.NET Design Patterns). These are both books that respond to Evans’s and Nilsson’s books. It is a shame really.
