The general consensus is that even when you create an accurate representation of a production environment, consistently scaled across all tiers of the application, you will still get better performance metrics and a higher degree of confidence in the anticipated application behaviors by utilizing service virtualization.
There are just too many constraints that must all align perfectly to perform sunny-day performance testing. And even with this alignment, you still face the very difficult, if not impossible, task of performing the appropriate level of negative and operational testing in an end-to-end environment. These same testing and validation activities are much easier to perform with service virtualization.
Here are a few examples of the constraints that service virtualization mitigates:
Earlier Performance and Resiliency Validation Capabilities
- Historically, you had to wait until the required systems reached the appropriate quality level before starting performance validation. Now you can pair service virtualization with existing performance testing tools and scripts and execute these test scenarios during development, which lets developers correct issues while the code is still fresh in their minds. An ancillary benefit is the ability to introduce a greater depth and breadth of testing capabilities such as negative testing, operational testing, and performance engineering. A minimal stub, as sketched below, can stand in for a dependency that is not yet ready.
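As a rough illustration, here is a minimal sketch (in Python) of a virtual service that stands in for an unfinished downstream dependency so that performance scripts can run during development. The endpoint, port, and payload are hypothetical; dedicated service virtualization tools provide the same capability along with recording and management features.

```python
import json
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# Canned payload returned in place of the real downstream response.
CANNED_RESPONSE = json.dumps({"accountId": "12345", "balance": 100.0}).encode()

class VirtualAccountService(BaseHTTPRequestHandler):
    """Stands in for a downstream system that is not yet built or stable."""

    def do_GET(self):
        # Always answer with the recorded/canned payload.
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(CANNED_RESPONSE)))
        self.end_headers()
        self.wfile.write(CANNED_RESPONSE)

    def log_message(self, fmt, *args):
        # Keep the console quiet while performance scripts hammer the stub.
        pass

if __name__ == "__main__":
    # Point the application under test at http://localhost:8080 in place
    # of the real downstream system.
    ThreadingHTTPServer(("localhost", 8080), VirtualAccountService).serve_forever()
```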
Introduction of Comprehensive Negative Testing
- In an end-to-end environment, there are limited capabilities to adjust the response time of a downstream system, introduce a failure from a component that you call, interface with an external system, or adjust conditions (such as buffer sizes) that do not currently exist within the test bed of data.
- Service virtualization provides a much more realistic representation of production behavior by allowing response times to vary, simulating production norms.
- Service virtualization allows you to “slow down” back-end systems, providing the capability to observe that behavior in a test environment. This type of testing helps identify constraints that will be stressed during these “abnormal production” scenarios.
- Service virtualization can accurately simulate a “hard” shutdown of an application or network component to validate application behaviors; the sketch after this list illustrates both the latency variation and the hard-failure case.
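Here is a minimal sketch of how that latency variation, back-end slow-down, and hard-failure injection might look in a hand-rolled virtual service. The distribution parameters, failure rate, and port are assumptions for illustration; commercial and open-source virtualization tools expose equivalent knobs through configuration.

```python
import random
import time
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

SLOW_MODE = False     # flip on to simulate a degraded back-end system
FAILURE_RATE = 0.05   # fraction of requests that simulate a hard failure

class DegradableBackend(BaseHTTPRequestHandler):
    def do_GET(self):
        # Hard shutdown: drop the connection with no HTTP response at all,
        # the way a crashed process or failed network component would.
        if random.random() < FAILURE_RATE:
            self.close_connection = True
            self.connection.close()
            return

        # Vary the response time around production norms; a log-normal
        # distribution is a common approximation of service latency.
        delay = random.lognormvariate(mu=-2.5, sigma=0.4)  # ~80 ms median
        if SLOW_MODE:
            delay *= 10  # degraded back end responds an order of magnitude slower
        time.sleep(delay)

        body = b'{"status": "ok"}'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # suppress per-request logging under load

if __name__ == "__main__":
    ThreadingHTTPServer(("localhost", 8081), DegradableBackend).serve_forever()
```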
Introduction of Operational Testing
- Once we could isolate and properly test each component to identify its normal and breakpoint behaviors, we could then introduce monitoring tests, tests of the system recovery tools, and logging validation.
- Historically, we would set a monitoring threshold of 65% CPU utilization at the application layer, but we could never validate that the alert actually fired: the mainframe in our test environment was scaled to only 5% of production, and that constraint kept us from pushing the application tier north of 30% CPU. Now that you can isolate each tier, you can drive the system to its breaking point and validate your monitoring warnings and alerts (see the step-load sketch after this list).
- By stressing a component to expected production levels and then pushing it to the breaking point, we can now validate the system recovery tools and automation. The only way to ensure that the recovery tools and automation work properly is to execute these conditions in test; we do not want to discover a syntax error during a production event, where it would increase the outage duration and customer impact.
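As a rough sketch of that idea, the step-load driver below ramps concurrency against an isolated tier (with its dependencies virtualized) until the tier breaks, at which point you would confirm out of band that the monitoring alerts fired and the recovery automation engaged. The target URL, step sizes, and error-rate cutoff are hypothetical.

```python
import concurrent.futures
import time
import urllib.request

# Tier under test; its downstream dependencies are virtualized so the
# load is not capped by an undersized shared system.
TARGET = "http://localhost:8080/"

def one_request() -> bool:
    try:
        with urllib.request.urlopen(TARGET, timeout=5) as resp:
            return resp.status == 200
    except Exception:
        return False

def run_step(concurrency: int, duration_s: int = 30) -> float:
    """Hold a fixed concurrency level for a while and return the error rate."""
    deadline = time.time() + duration_s
    ok = err = 0
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        while time.time() < deadline:
            futures = [pool.submit(one_request) for _ in range(concurrency)]
            for f in futures:
                if f.result():
                    ok += 1
                else:
                    err += 1
    return err / max(ok + err, 1)

if __name__ == "__main__":
    # Step the load up until the tier breaks, then check (out of band) that
    # the CPU warning fired and the recovery automation engaged.
    for workers in (10, 20, 40, 80, 160):
        error_rate = run_step(workers)
        print(f"{workers} workers -> error rate {error_rate:.1%}")
        if error_rate > 0.5:
            print("breaking point reached; verify alerts and recovery logs")
            break
```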