A recent question on NAVLoadTest about a possible “Combined Unit & load-testing module” using the NAV Application Test Toolset got me thinking about the differences between Application “Unit Tests” and Performance Tests. There are important differences in both the goals and the design of Performance Tests, such as the NAVLoadTest scenarios, and Application Tests written using the NAV Application Test Toolset.
- Application Tests are designed to verify correct functionality of a module.
- Performance Tests are designed to measure some aspect of system performance.
- Application Tests are designed to test individual C/AL objects and methods in isolation. The tests execute entirely in the NAV Server, so no client-server communication is involved.
- Performance Tests exercise end-to-end user interactions. They run against the NAV client services hosted in IIS, which means the tests measure the resources consumed by the client layer, the NAV Server, SQL Server, and the communication between those layers.
- Data Isolation:
- Application Tests are designed to be data-independent and to run in isolation from other tests. Any changes made to the database while running tests from the Test Tool are automatically rolled back by the Test Isolation feature.
- Performance Tests depend on existing data, create persistent data, and are affected by the data created by other tests. One goal of the load test scenarios is to observe how scenario performance changes as the dataset grows. One of the hardest parts of writing load test scenarios is ensuring that the tests continue to run predictably as the dataset changes.
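One common way to keep a scenario predictable as the dataset grows is to have each iteration create its own uniquely identified data rather than relying on fixed records that other tests may modify. NAVLoadTest scenarios are written in C#; the sketch below shows the same idea in language-neutral Python. The helper name `next_document_no` and its numbering scheme are hypothetical, not part of any NAV API (a real scenario might use a GUID or a NAV number series instead).

```python
import itertools
import threading

# Process-wide counter shared by all simulated users. Protecting it with a
# lock keeps the numbers unique even when virtual users run concurrently.
_counter = itertools.count(1)
_lock = threading.Lock()

def next_document_no(prefix="LOADTEST"):
    """Return a document number unique across concurrent virtual users,
    so each scenario iteration works on data it created itself."""
    with _lock:
        n = next(_counter)
    return f"{prefix}-{n:06d}"
```

Because every iteration posts against its own documents, the scenario neither depends on a fixed starting dataset nor collides with data left behind by earlier runs.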
- Test Verification:
- Application Tests follow the “Arrange – Act – Assert” pattern of unit tests (see http://c2.com/cgi/wiki?ArrangeActAssert). They verify that state has not changed unexpectedly during the test.
- Performance Tests have no way of controlling the initial state, because other tests can be running concurrently against the same database. They must be resilient to changes in data and to errors that occur during test execution, handling them appropriately as a user would. For example, the “another user has locked the record” error occurs frequently in load tests under a concurrent user load.
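The Arrange – Act – Assert shape of an Application Test can be sketched in a few lines. This is plain Python standing in for C/AL test codeunits; the ledger dictionary and the posting step are invented purely to show the three phases.

```python
import unittest

class PostingTest(unittest.TestCase):
    """Minimal Arrange-Act-Assert sketch (not real C/AL, just the pattern)."""

    def test_posting_updates_balance(self):
        # Arrange: build exactly the state the test needs, nothing more.
        ledger = {"balance": 0}
        # Act: run the single operation under test.
        ledger["balance"] += 100
        # Assert: verify the resulting state matches expectations.
        self.assertEqual(ledger["balance"], 100)
```

Because the test arranges its own state and Test Isolation rolls back its changes, the assertion is deterministic in a way a load test scenario can never be.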
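Handling the “another user has locked the record” error the way a user would (wait briefly, then try again) is usually done with a retry wrapper around each scenario step. A minimal sketch, assuming Python in place of the real C# scenario code; `RecordLockedError` and `run_with_retry` are hypothetical names, not NAVLoadTest APIs.

```python
import random
import time

class RecordLockedError(Exception):
    """Stands in for NAV's 'another user has locked the record' error."""

def run_with_retry(action, attempts=3, base_delay=0.1):
    """Run a scenario step, retrying on lock errors with a short,
    slightly randomized back-off, roughly as a patient user would."""
    for attempt in range(1, attempts + 1):
        try:
            return action()
        except RecordLockedError:
            if attempt == attempts:
                raise  # give up and let the test report the failure
            time.sleep(base_delay * attempt + random.uniform(0, base_delay))
```

The jittered back-off matters under load: if every virtual user retried after an identical delay, they would all collide with the lock again at the same moment.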
There are probably more differences that I haven't covered. When writing performance tests it may seem easiest to start from the scenarios used in existing application tests, but whenever I try to reuse an application test as a performance test, I end up rewriting it to cover situations that never occur in the original test.