There are various approaches to designing tests. Here are some suggestions:
In general a combination of the approaches below may be appropriate
(e.g. targeted state tests with random data).
Black-box testing. Tests ‘in ignorance’ of the implementation of the unit. The tester knows the interface and the intended function and attempts to find faults purely by applying input sequences.
White-box testing. The tests are designed by looking at the implementation and intentionally stressing parts of its design. This is also known as ‘glass-box testing’ and by other names.
Both have advantages and disadvantages. A black-box test may fail to
identify some special case which has to be ‘trapped’ in
the implementation. On the other hand, if the code (or schematic) is
available it may influence the test strategy: exercising all the code
may suggest that the system works, but what about the cases the
designer didn't think of?
It's all too easy to follow an existing plan.
The tester inputs some (probably simple) values to check the expected function of the system. For example a ‘black box’ test of a division unit could have inputs which are positive, negative and zero. [What is the defined behaviour of a division by zero? What about 0 ÷ 0?] Thus nine test cases (every sign combination of the two operands) may cover much of the behaviour.
Then consider some other questions. Is this an integer divide? What should happen/does happen when the result is non-integer? Is the rounding/truncation correct? Try some more appropriate cases.
So-called “Monte Carlo” testing applies random input patterns. This is often a cheap way of generating wide test coverage and has the advantage that it may ‘think of’ cases a human wouldn't. However it offers no guarantee of complete coverage.
The distribution of the random numbers may also be important:
The 32-bit ARM processor has an instruction Count Leading Zeros (CLZ) which counts the number of consecutive ‘0’ bits from the most significant end of a register. The result can therefore range from 0 to 32 (decimal).
The last case (32) is unusual in that, in binary, it is 100000 and it occurs for only one of the 2³² possible inputs: the all-zeros word. This is unlikely to appear in a couple of million uniformly distributed random inputs, yet it is important in that it is the only combination to output a ‘1’ in bit 5. This should be tested!
Generating a random 32-bit pattern and then right-shifting it by a random 0–31 places will, for example, generate a much more appropriate spread of test values here. Think about the distribution of data elements.
Older Verilog standards support a $random system function which returns a 32-bit pseudorandom integer (which can subsequently be masked or divided into a desired range).
SystemVerilog augments this with some other potentially useful constructs, such as $urandom, $urandom_range and constrained-random stimulus via randomize().
If a module has internal state – and many do – this state also forms part of the input. Thus it may be necessary to test the unit with a particular set of inputs in each of its internal states. This can greatly enlarge the test set!
To ensure(?!) that the unit is in the correct state may require a deliberate test sequence beforehand. Thus the above categories should not be seen as exclusive: e.g. a particular set of inputs could be used to precede a random input set.
As the design size increases (i.e. more modules are integrated) control of internal states becomes increasingly hard. Thus confidence in each unit of the hierarchy is essential.