
Manual Regressions: an oxymoron?

July 12th, 2009

I was at a testing talk the other night. The discussion turned to automated testing, which got me asking: are manual regressions an oxymoron? My feeling was that they aren't, if regression is simply the idea of re-testing. Re-testing that the application is fit for purpose is one type of regression, as is re-testing that a particular business rule is still applied. But it does make us think about why we regression test the way we do, whether manually or automated.

Luckily the speaker replied concurring with my thoughts:

No, I don’t think it’s an oxymoron. It may be cheaper to manually test some (hopefully rare) aspects repeatedly, where a very high cost of automation is unlikely to give sufficient payback over the lifetime of the project. An extreme version of that is automated usability testing, which we can’t do. But I agree that it’s a useful first-step generalisation, given that many people don’t see manual regression testing as an oxymoron under any circumstances.

How then would we capture these regression tests? Are they part of a test plan? How does the team know to do them? Are they part of the done criteria? How do we maintain these lists? How do we make sure that they are followed and that new members are inducted?

It seems to me that there should be scripted tests (scripted in the checklist sense, not the programmatic one). Where these are held is debatable. In my teams, I would want them as close to the source code as possible (i.e. in the source code itself). I want them in plain text if at all possible. I want them as short as possible. They should resemble smoke tests.
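As a sketch of what that could look like, the manual checks might live as a short plain-text file in the repository, with a small helper that prints them at release time (the file name, format and script here are my own illustrative assumptions, not something prescribed in this post):

```python
# A minimal sketch: manual regression checks kept as plain text in the
# repository (file name and format are hypothetical), plus a helper that
# a release script or CI job could use to print the checklist as a
# reminder before sign-off.

CHECKLIST = """\
# manual-regression-checks.txt -- lives next to the source code
login: verify the login page renders and rejects a bad password
invoice: spot-check one invoice total against a hand calculation
usability: click through the signup flow on a small screen
"""

def parse_checks(text):
    """Return (area, description) pairs, skipping comments and blank lines."""
    checks = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        area, _, description = line.partition(":")
        checks.append((area.strip(), description.strip()))
    return checks

if __name__ == "__main__":
    # Print the checks as an unticked checklist for the tester.
    for area, description in parse_checks(CHECKLIST):
        print(f"[ ] {area}: {description}")
```

Because the file is versioned with the code, it changes in the same commits as the features it covers, which also answers the induction question: a new team member finds the list where they find everything else.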

In teams that tend to keep the testers separate from the developers, I see these tests kept in separate sources: Excel, Word, test-management systems (e.g. Test Warehouse), bug trackers (e.g. JIRA), or systems that are supposed to link everyone together (e.g. TFS 2010).

However, I find that these approaches usually hide a lack of commitment to continuous integration, continuous deployment and a well-layered test automation strategy, along with a lot of multitasking and handoffs.