Regression tests

[Figure: the test process – changes are followed by retesting, so that the final design still meets all the test criteria.]

“Primum non nocere”
[“First do no harm.”]
Attr. Worthington Hooker

Regression tests are tests which ensure that something hasn't been broken by later changes. These changes can occur for various reasons, such as:

- modification of the design itself, for example to fix bugs or add features;
- development of a new design which must remain compatible with its predecessors.

An example of the latter case: imagine you are the Intel Corporation. Most of your business is founded on processors which are backwards compatible with earlier generations, so it is vital that new devices can still run old code. You (presumably!) therefore keep a suite of old software – contrived to exercise every aspect you have thought of – with which to test the compatibility of your latest device.

Testing can be boring, especially if you expect everything to pass. This should normally be the case following a minor edit to a(n apparently) unrelated block. If such retesting relies on human attention then it probably isn't going to be very reliable.

Regression tests should be automated as much as possible. It is easy to see how this can be done in many cases. For example, a processor can be given code which is self-checking; run it and it says ‘okay’ (or not). It's a bit harder in some systems, but an appropriate test harness can do a similar job. As machines do not get bored, this is a good solution: all you have to do is run it.
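For illustration, here is a minimal sketch of such a harness in Python. The test programs listed in TESTS are hypothetical; the only assumption is that each one is self-checking, exiting with status 0 for ‘okay’ and non-zero otherwise.

    # Minimal regression harness: run every self-checking test and
    # give a single verdict.  The test paths are placeholders.
    import subprocess
    import sys

    TESTS = ["tests/alu_check", "tests/memory_check", "tests/interrupt_check"]

    def run_suite():
        failures = []
        for test in TESTS:
            # Each test says 'okay' (exit 0) or not (non-zero exit).
            if subprocess.run([test]).returncode != 0:
                failures.append(test)
        return failures

    if __name__ == "__main__":
        failed = run_suite()
        print("okay" if not failed else "FAIL: " + ", ".join(failed))
        sys.exit(1 if failed else 0)

Because the whole suite is a single command with a single verdict, it can be rerun after every edit, however minor.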

Of course these tests must be thorough and, occasionally, may highlight a problem. It is therefore useful if they contain some diagnostic information as well. It would be intimidating to know that there was a fault somewhere in your design but not know where!
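A sketch of one way to attach diagnostics, again in Python; the check names and values here are invented for illustration:

    # Each check records where and what went wrong, not merely that
    # something did, so a failure points at the faulty area.
    def check(name, expected, actual, log):
        if expected != actual:
            log.append(f"{name}: expected {expected:#x}, got {actual:#x}")

    log = []
    check("R3 after ADD", 0x2A, 0x2B, log)    # hypothetical result: will fail
    check("Z flag after CMP", 0x1, 0x1, log)  # hypothetical result: passes
    print("okay" if not log else "FAIL\n" + "\n".join(log))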

The best time to put these tests in place is when the appropriate system is first developed. That is the time when you're concentrating on what could go wrong and, probably, discovering bizarre behaviour. Having thought of a test, do it tidily and add it to the regression test suite before moving on. It will save time later.

Are your tests actually working? Sometimes you may get a ‘false positive’ result, where they flag up a fault that does not really exist. This may be a genuine mistake; it may be over-zealous checking of values which don't really matter. Whatever the cause, it should draw attention and be fixed.
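One way to avoid the over-zealous case is to compare only the values that the specification actually defines. A sketch, assuming a hypothetical eight-bit status register in which two bits are left undefined:

    # Mask out 'don't care' bits before comparing, so legitimate
    # variation is not reported as a fault.  The mask value is an
    # assumption for illustration.
    DONT_CARE = 0b0000_1100                # bits the specification leaves undefined

    def status_matches(expected, actual, dont_care=DONT_CARE):
        care = ~dont_care & 0xFF           # the bits that really matter
        return (expected & care) == (actual & care)

    assert status_matches(0b1010_0001, 0b1010_1101)      # differ only in don't-cares
    assert not status_matches(0b1010_0001, 0b0010_0001)  # a genuine mismatch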

“Quis custodiet ipsos custodes?”
["Who will guard the guards themselves?"]
Juvenal

More worrying is the ‘false negative’, where a test misses a genuine problem. This could be something you didn't think of – you can't do much about that; you need independent reviewers to help – but it may be something you thought you had tested for, where the test suite itself contains a bug.

Tests are normally augmented rather than changed, so the tests themselves should be debugged when they are first produced. To exercise the tests, introduce deliberate faults into (copies of) the system and demonstrate that the tests pick them up satisfactorily.
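A sketch of this ‘testing the tests’ idea, with a trivial stand-in design; adder() and its deliberately broken variant are invented for illustration:

    # Check the checker: a copy of the design with a known, deliberate
    # fault must be caught by the suite, or the suite itself is buggy.
    def adder(a, b):
        return a + b                       # stand-in for the real design

    def broken_adder(a, b):
        return a + b + (1 if a == 7 else 0)  # deliberate, known fault

    def suite_passes(add):
        # Exhaustive check over a small input space.
        return all(add(a, b) == a + b for a in range(16) for b in range(16))

    assert suite_passes(adder)             # the real design passes
    assert not suite_passes(broken_adder)  # the fault must be detected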

