Ensuring that an ERP application works after it is deployed at a customer site is a challenge. The base application, which is both broad and deep, has a myriad of modules, features, and software and hardware configurations. ISVs extend or modify this base application to provide functionality for specific market needs and verticals. Finally, implementation partners or customers perform point customizations to address specific needs. This combination of base application, one or more ISV add-ons, and point customizations that operate in a unique hardware and software configuration is what businesses depend on for mission-critical financial data and company operations.
Because of all these modifications, it is important to test the solution thoroughly to make sure that none of them has introduced system issues.
Testing process
There are several ways to test developed functionality, depending on the phase in which testing is performed.
Development phase
Peer reviews
Software inspections, which are a rigorous approach to peer reviews, are defined as “peer review of any work product by trained individuals who look for defects using a well-defined process” (Wikipedia). The most important aspect of peer reviews is that multiple people think about, and work on, the same problem, and the focus is on identifying defects that can be prevented before the next phase of development.
Static analysis
Static analysis tools evaluate the software code (source or object). Tools such as the Microsoft Dynamics AX Best Practice checks and Visual Studio Code Analysis (see Best Practices for Microsoft Dynamics AX Development) identify potential violations of programming and design guidelines. Like peer reviews, static analysis tools catch quality issues early in development, which prevents costly downstream discovery of issues when the software is tested. In addition to the predefined checks, custom checks can be created for project-specific issues.
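As a small illustration of how these checks interact with day-to-day X++ development, a reviewed deviation can be documented directly in the source so that the check is suppressed. The job below is invented for this sketch; the //BP Deviation Documented comment is the standard suppression convention:

    // Hypothetical X++ job. The Best Practice checks flag hard-coded,
    // double-quoted user-interface text, because such text should come
    // from a label. A reviewed, intentional deviation can be documented
    // so that the check is suppressed for the statement that follows.
    static void demoBestPracticeDeviation(Args _args)
    {
        //BP Deviation Documented
        info("Diagnostic text used only in this internal job");
    }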
Testing phase
Unit testing
In software engineering, unit testing has grown significantly in the past few years. This is a bottom-up testing approach in which automated tests are written by the developer. Justifying the development of unit tests in parallel with product code can be a challenge. One argument against unit testing is that it takes highly paid developers away from writing the production code. Although this is a reasonable argument over the short term, the benefits of writing unit tests are significant. These benefits include finding bugs earlier, providing a safety net of tests for changes that are made later, and improving design. Over the long term, unit testing improves customer satisfaction and developer productivity.
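As a minimal sketch of what such a developer-written test can look like in the X++ SysTest framework (the customer account 'US-001' is hypothetical demo data, not part of any shipped test):

    // A SysTest unit test class. Test methods are discovered by the
    // "test" name prefix; setUp runs before each test method.
    class CustBalanceTest extends SysTestCase
    {
        public void setUp()
        {
            super();
            // Create or select the records that the tests depend on here.
        }

        void testNewCustomerBalanceIsZero()
        {
            CustTable custTable;

            custTable = CustTable::find('US-001'); // hypothetical demo account
            this.assertEquals(0.0, custTable.balanceMST(),
                'A new customer must have a zero balance');
        }
    }

Because such tests run quickly and need no user interface, they can be rerun after every change, which is what produces the safety net described above.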
Functional testing
As new functionality (also called a feature) is developed, testers should validate that the requirements are being met by that functionality. Testing at the feature level enables a fast turnaround on defects, which improves the efficiency of the development process. As the Sure Step definition implies, the majority of the functional testing of features in the Microsoft Dynamics AX ecosystem is done by domain experts. These domain experts have various titles – for example, functional consultant, business analyst, department head, functional solution architect, and power user.
Visual Studio 2010 brings some powerful testing tools to the Microsoft Dynamics AX ecosystem. The new Microsoft Test Manager (MTM) application is specifically targeted at a test team that is made up of domain experts. MTM contains functionality for the entire testing cycle, and is an enabler for both a test lead and the functional testers. For more information about how to use MTM, see Testing the Application. MTM is available in the Visual Studio Ultimate and Visual Studio Test Professional packages.
Significant work went into Microsoft Dynamics AX 2012 to enable the data collectors in MTM to accurately record user actions when a test is run. The tester can then “fast forward” through the test case when it is rerun later.
Subprocess, process, integration, and user acceptance testing
Subprocess, process, integration, and user acceptance testing are all forms of broader-scope tests, as defined in Sure Step. After the new functionality is validated in isolation, as described in the previous section, it has to be validated as part of a business process or user workflow. The scope of these tests varies. One test might be a single-user task that is performed independently, whereas another test might encompass several users or roles in the system as it tracks a workflow across the company’s activities. Examples of the latter include the “quote to cash” and “procure to pay” processes.
Data management
Effective data management is critical for ERP testing efforts. Data sets must be sufficiently complex and large to enable effective functional validation, but not so large that deploying the data for test systems is excessively time consuming.
Throughout the Microsoft Dynamics AX 2012 development cycle, the development team created and maintained a data set that struck a balance between functional completeness and size. This data set, known as the Contoso data set, is being made available externally as demo data, together with instructions for loading the data. This data set is a good starting point for either an ISV product development effort or the early phases of a new implementation.
One key to effective testing is ensuring that the system is in a known state when a test is started. This is especially true for automated testing, because a human who runs a test manually can more effectively deal with an unknown state than a computer, which requires a specific state. The following are some strategies for effectively maintaining a known state at the start of a test:
- Reset the system to a known state at the start of each test or group of tests. For simple tests that affect only part of the system, scripts can be written to clean specific tables and restore a base set of data (see the sketch after this list). For more complex scenarios, an effective approach is to maintain a database backup and restore it. The Microsoft Dynamics AX development team creates “save points” of the database in the desired states, and then restores these save points in the setup portion of a test group.
- Reset the system to a known state at the end of each test or group of tests. For SysTest-based tests, built-in functionality enables SysTest to track changes that are made during the test, and then restore the system to the pretest state during test clean-up.
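As an illustration of the first strategy, a SysTest setUp method can clear the affected tables and reinsert a base data set so that every test starts from the same state. This is a minimal sketch; the DemoStagingTable table and its fields are invented for the example:

    class InventPostingTest extends SysTestCase
    {
        // Reset the affected table to a known state before each test.
        public void setUp()
        {
            DemoStagingTable stagingTable; // hypothetical table

            super();

            ttsBegin;
            // Remove residue that earlier test runs may have left behind.
            delete_from stagingTable;

            // Reinsert the base data set that the tests expect.
            stagingTable.ItemId = 'ITEM-001';
            stagingTable.Qty    = 10;
            stagingTable.insert();
            ttsCommit;
        }
    }

For broader isolation, the SysTest framework also provides suite classes, such as SysTestSuiteCompanyIsolateClass, that run a test class against a temporary company whose data is discarded when the suite finishes; choosing an isolation level is a trade-off between safety and run time.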
Use of automated testing tools
Regression testing is “any type of software testing that seeks to uncover new errors, or regressions, in existing functionality after changes have been made to the software, such as functional enhancements, patches or configuration changes” (Wikipedia). The most common approach to regression testing is to rerun previously run tests to verify that the application’s behavior has not changed. Ideally, there would be an automated test tool that software testers could use to run the same tests repeatedly. Unfortunately, automation tools, particularly record-and-playback tools, have a mixed history in software engineering. The tools have historically been unstable because of sensitivities in the execution environment and other factors, and they often generate large volumes of code that is very difficult to maintain. Frequently, the maintenance costs quickly exceed the cost savings. This is why automated testing of ERP systems remains a subject of debate.
Exit criteria for testing
An important part of completing a milestone, such as conducting testing, is the creation and review of milestone exit criteria. These criteria form the “definition of done” for the milestone, and are made up of product criteria and engineering criteria. The following are some examples of testing exit criteria:
Product criteria:
- All functional test cases (automated and manual) are run, failures are reviewed, and bugs are created for all failures.
- All upgrade test cases (automated and manual) are run, failures are reviewed, and bugs are created for all failures.
- Targeted functional test cases are run in selected international environments, failures are reviewed, and bugs are created for all failures.
- All bugs that meet a targeted severity and priority must be fixed and retested.
- Accessibility test cases are run, failures are reviewed, and bugs are created for all failures.
- End-to-end scenarios are run and meet targeted quality goals.
- And many more.
Engineering criteria:
- No static analysis errors or warnings occur.
- The percentage of priority 1 test cases that are automated must meet a target.
- The percentage of all test cases that are automated must meet a target.
- Code coverage for each area of the system should meet a target, and coverage gaps should be reviewed.
- And many more.