A property and casualty insurer, providing personal and commercial insurance products nationwide, is in the process of re-architecting its complex insurance policy systems to meet broader insurance market needs. By replacing many legacy data transfer and business process technologies with an ESB and SOA-based architecture, the firm is implementing systems that can flexibly scale and deliver new functionality faster to meet the needs of a larger insurance marketplace.
“Basically we started with a quick tactical project, showed immediate value, and that became the compelling reason to bring Service Virtualization in house.”
– Director of Architecture and Performance
“Policy Administration is basically the heart and soul of an Insurance company,” said the Director of Architecture and Performance for the company. “Revenue depended on going live quickly with a new multi-tier system for policy management, and scaling up to full volume with a system to support the specific rules of all 50 states presents a big performance challenge.”
For insurers, delivering a solution that meets performance Service Level Agreements (SLAs) is necessary for top-line revenue growth, enabling the rapid rollout of new insurance offerings to meet customer needs. The new policy management architecture faced these key challenges:
- Scheduling and test data conflicts in live environments. The company’s live systems, such as quoting, pricing, and records, were frequently either too busy to allow test use or simply unavailable, limiting test environments to as little as 2 hours of sustained testing per week. Testing was further complicated by third-party systems, such as DMV or credit bureau services, that sometimes charged per-transaction fees. The potential for performance degradation, data corruption, and unexpected access costs stymied performance test efforts in many states.
- Inability to virtualize systems for the test lab by conventional means. Using hardware virtualization techniques to attempt to replicate the distributed services and systems for performance testing purposes didn’t work, as the assets were either too large (for instance, multiple terabyte mainframes), inaccessible to the team, or incomplete when needed.
- Inability to isolate performance testing results to the component level. The team was conducting UI-driven load tests of their applications, which didn’t provide enough detail. “When you push testing through a UI, that’s black-box testing,” said the director. “Service Virtualization allows white-box testing, so you can isolate and identify what makes up that transaction, piece by piece. Without that detail, a typical load test might tell me ‘It’s 22 seconds.’ I know that’s slow, but I don’t know why.”
- Immediate need to meet a performance budget for a live, complex system. The firm’s agents and users expect SLA thresholds for how much time is acceptable for a response to any policy-related request. The director describes this service level expectation: “For instance, say we have a ‘performance budget’ of 20 seconds. Within that, I need to determine how much time each of the isolated components that makes up that budget is spending, in order to know where additional tuning is needed.”
“We had the classic problem of ‘You can’t test anything until you have everything,’” said the director. “Before Service Virtualization, we did end-to-end testing from a user interface, and we couldn’t simulate the dependent systems we needed, or ‘wrap around’ any of the isolated tiers in our architecture so we had something available to run our tests against. We tried coding our own stubs, which required more maintenance than we expected, so we decided we needed more tooling. That’s when our development partner suggested we get started with Service Virtualization, because they already had people with some experience using it.”
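The hand-coded stubs the director mentions can be pictured as a small HTTP service that returns canned responses in place of a real dependent system. The sketch below is illustrative only: the endpoint paths and payloads are hypothetical, not the firm’s actual interfaces, and it shows why such stubs grow into a maintenance burden as dependencies multiply.

```python
# A minimal hand-rolled stub of the kind the team first tried, standing in
# for dependent systems such as a credit bureau or DMV lookup.
# Paths and payloads are hypothetical examples.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Canned responses keyed by request path; every new dependency and every
# new data variation means another entry to hand-maintain.
CANNED = {
    "/credit-rating": {"score": 712, "source": "stub"},
    "/dmv-lookup": {"license_valid": True, "source": "stub"},
}

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = CANNED.get(self.path)
        if body is None:
            self.send_response(404)
            self.end_headers()
            return
        payload = json.dumps(body).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

def serve(port: int) -> HTTPServer:
    """Return a stub server; call .serve_forever() to handle test traffic."""
    return HTTPServer(("localhost", port), StubHandler)
```

A test client pointed at `serve(8080)` instead of the live system gets predictable responses with no scheduling windows or per-transaction fees, which is the core idea Service Virtualization tooling then generalizes.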
Service Virtualization offered the development and testing teams the broad ability to test, validate, and virtualize most elements of their complex, multi-tier environment. In particular, Service Virtualization’s ability to virtualize the dynamic behavior and performance of dependent systems enabled the 24/7 availability of realistic, testable systems, and faster development at a fraction of the cost and effort of attempting hardware virtualization and physical test lab approaches.
While the practice of service virtualization can be applied for automated functional and regression testing purposes, the firm first addressed the area of most critical need: performance testing and tuning the newly deployed systems to ensure that expected service levels were met. “This first project was about creating virtual test environments as a target for performance testing,” said the director. “We understood there is also a functional aspect to testing that we can address later.”
Here’s how the firm and their implementation partner teams applied Service Virtualization:
- Decreased cost and effort by managing virtual test environments instead of physical test beds. Service Virtualization’s ability to capture and simulate services, data, and third-party apps such as SaaS and SOA made testing efforts more productive. “When I have multiple performance testers, and the production environment is going to be down, I understand that I need to create a simulation, and simulate a set of valid results back to them when they are testing a service,” said the director.
- Achieved more visibility into component-level performance issues. UI or acceptance testing approaches were never complete enough to obtain relevant results by themselves. “Load testing a UI gives me an aggregate performance that may be unacceptable at 22 seconds, and then Service Virtualization gets me the isolation to break it down and find out each step that is contributing to that 22 seconds, so that I can address it,” the director said.
- Got started quickly with skilled services teams. “We needed a quick win on this activity, so we worked with our System Integrator partner resources familiar with Service Virtualization. Before we even had dedicated resources, we had skilled people with the tools on site, and we were able to deliver valuable results almost immediately.”
- Drove high-volume load tests from both Service Virtualization and HP testing solutions, and measured results. Service Virtualization made existing test lab tools more productive, and the firm’s project team ran performance tests of 15,000 or more transactions per hour against the Quoting Engine (HTTPS), the WebSphere Message Broker (WMB), as well as directly against underlying services (SOAP). “We also proved that we could use Service Virtualization and HP’s LoadRunner together: they didn’t collide,” said the director. “In practice they worked well together, and comparing Service Virtualization’s middle-tier results and LoadRunner UI testing metrics on the same screen was very beneficial.”
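A load run of this kind, sustained calls against a virtual service with per-request latencies collected for analysis, can be sketched generically. This is not the firm’s LoadRunner harness; the transaction callable, counts, and thread pool size below are placeholders.

```python
# A generic sketch of a concurrent load driver: execute a transaction
# (e.g. an HTTPS quote request or a SOAP call against a virtual service)
# many times across worker threads, recording each call's latency.
import time
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, List

def run_load(transaction: Callable[[], None], total: int, workers: int = 10) -> List[float]:
    """Run `transaction` `total` times across `workers` threads.
    Returns the observed latency of each call in seconds."""
    def timed(_: int) -> float:
        start = time.perf_counter()
        transaction()  # placeholder for the real service call
        return time.perf_counter() - start
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(timed, range(total)))

def percentile(latencies: List[float], pct: float) -> float:
    """Simple nearest-rank percentile over observed latencies."""
    ordered = sorted(latencies)
    return ordered[min(len(ordered) - 1, int(len(ordered) * pct / 100))]
```

Feeding the collected latencies into percentile summaries (median, 95th) is how aggregate numbers like the “22 seconds” the director cites get compared against SLA thresholds.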
“We use Service Virtualization to do isolation testing of each of the components,” said the director. “Basically, we build up our performance budget, so we know exactly where in the transaction chain the time is being spent: whether that is the Quoting system query, or the message broker, or how much time is spent getting a credit rating from a third party, or how much time is spent getting a response from the print service. When you can do isolation at a component level against the performance budget, you can identify the weak link in the chain and address it.”
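The performance-budget bookkeeping the director describes reduces to simple arithmetic once each component is timed in isolation. The numbers below are invented for illustration; only the 20-second budget figure comes from the director’s earlier example.

```python
# A toy illustration of a 'performance budget' breakdown: per-component
# timings (hypothetical values, in seconds) are summed against the
# end-to-end budget, and the slowest link is flagged for tuning.
BUDGET_SECONDS = 20.0

# Illustrative isolated timings from white-box tests of each component.
component_times = {
    "quoting_query": 6.5,
    "message_broker": 2.1,
    "credit_rating": 9.8,   # third-party call
    "print_service": 1.4,
}

def budget_report(times: dict, budget: float):
    """Return (total_seconds, weakest_component, within_budget)."""
    total = sum(times.values())
    weakest = max(times, key=times.get)
    return total, weakest, total <= budget

total, weakest, ok = budget_report(component_times, BUDGET_SECONDS)
print(f"total={total:.1f}s budget={BUDGET_SECONDS:.0f}s "
      f"weakest={weakest} within_budget={ok}")
```

With these made-up figures the third-party credit rating call dominates the chain, which is exactly the kind of “weak link” component-level isolation is meant to surface.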
“We installed Service Virtualization quickly, and now we are going through formal training and institutionalizing of Service Virtualization within the company,” said the director. “Basically we started with a quick tactical project, showed immediate value, and that became the compelling reason to bring Service Virtualization in house.”
- Reduced release cycle time by 3-4 months. The team went from test schedule overruns to unlimited virtualized performance test access, which enabled them to exceed expectations and start the next phase of their project 4 months early.
- Increased test runs by 10X, and regained 56% more development and testing time in the lab. Using Service Virtualization, the firm was able to eliminate nagging test outages, regaining as much as 100 hours of lost test environment availability in a typical 8-week testing cycle. “With Service Virtualization, you don’t have to worry about what happens downstream. You can basically hit the Virtual Service and get the kind of responses you expect without worrying about data preparation,” said the director.
- Avoided 95% of third-party service testing costs. By eliminating the need to pay per-transaction costs and wait for test windows to systems such as credit reporting and DMV registries in each state, the firm was able to test and tune performance against these systems virtually, without the expense and test data variability of working with the live service providers for most performance tests.
- Achieved ROI within 8 weeks. Collaboration with a key partner already skilled in Service Virtualization, along with a strategic resource, greatly accelerated the quality process, enabling the project to be completed earlier and at lower cost than expected.
“I think that our success was a combination of skilled and knowledgeable resources, and the understanding of how flexible you need to be in this environment,” said the director. “During virtualization, you run into unexpected challenges in trying to wrap simulations around so many components. Now I know our team will be able to move quickly with Service Virtualization, adjust and get virtual services up and running in a very short time.”
With Service Virtualization, the firm’s performance team is successfully breaking down performance testing results into a performance budget for each process in their architecture, so response times can be systematically reduced to meet overall SLA goals. High responsiveness is key to customer retention in a competitive, high-volume insurance marketplace.