A Direct Broadcast Satellite Service Provider: Case Study
Company is one of the world's leading providers of digital television entertainment services. Through its subsidiaries and affiliated companies in the United States, Brazil, Mexico, and other countries in Latin America, the company provides digital television service to approximately 20 million customers in the United States and over 10 million customers in Latin America. The company differentiates itself from a large field of competitors by continuously innovating and releasing new products and features, almost all of which carry a heavy requirement for new software development and integration.
“That was the biggest constraint of manual stubbing -- it still left us depending on real systems. We needed a one-time virtualization step that could support multiple processes, functional, regression and performance testing.”
–SQA Manager
The IT group of the company, consisting of many development and integration teams, must deliver and ensure high performance levels for all critical customer-facing functionality, both on set-top boxes and through the company’s web portal, which supports browsers and mobile devices.
Delivering the enhanced functionality customers demand is not as simple as developing a single “big-bang” upgrade. The IT release process involves collaboration among several teams, including separate engineering and development teams, and third-party partners and service providers.
“For our enterprise releases, we need to have continuous integration with our partners in other groups, who all have their own schedules and priorities. When we couldn’t align our schedules, we couldn’t release projects on time. We started looking for a solution that allowed us to continue development, and get our scope out of the way without heavy dependency on our partners,” said Group Lead.
- Building mock services to support dev and test was costly and ineffective. “We had another project where we built integration with an outside vendor and it produced a different set of challenges, and we had to make them build stubs for us,” said Group Lead. “For instance, we had to deal with asynchronous transactions. So the third party would receive a request and then respond with an acknowledgement, but then asynchronously, after what could be several seconds of wait time, it would respond back with the actual session data.” (A sketch of this asynchronous pattern follows this list.)
- Timing conflicts between personnel of multiple teams. “We have our own timeline, but other teams have their own timelines, and their releases and codebases are different,” said SQA Manager. “We expected to add 8-16 hours of time for developers to build a stub that could support just one event and one user for functional testing. But for performance testing, they need multiple users and multiple scenarios, and that approach was never going to work.”
- Partner systems unavailable for realistic, stateful performance test scenarios. The performance team wanted to test their middleware layer and simulate transactions to a 3rd-party vendor, but the vendor would not allow test traffic in its environment. “Each scenario has a certain visit ID and that has to be uniquely handled in their live system,” SQA Manager said. “In the live system that became a problem, because they only allow limited data and limited addresses for use in our testing process to get these visit IDs back. We need to virtualize that 3rd-party provider, so there are no constraints on the visit IDs we submit to generate load on our middleware layer.”
- Need to reduce time to market for high-performance customer functionality. “In general we need to reach 99.99% availability at all times for our systems. For the last 2 years we never had one performance issue in production and we’re proud of that,” SQA Manager said. “If we cannot get end-to-end performance testing completed in time, we will be introducing a lot of risk. For us it’s about being able to deliver that faster, shorten the scope of the project, and shorten the overall time to market.”
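A minimal sketch of the asynchronous pattern described in the first challenge above, which also shows why a virtual service removes the visit-ID constraint: the handler acknowledges immediately, then delivers fabricated session data after a delay. The function names, delay, and payload fields are illustrative assumptions, not the company's actual protocol.

```python
# Hypothetical virtual service mimicking the partner's asynchronous
# protocol: acknowledge synchronously, deliver session data a few
# seconds later via a callback. All names here are illustrative.
import threading
import time
import uuid


def handle_request(visit_id: str, callback) -> dict:
    """Return an immediate acknowledgement; respond asynchronously."""
    ack = {"visit_id": visit_id, "status": "ACKNOWLEDGED"}

    def deliver_session_data():
        time.sleep(3)  # stand-in for the partner's multi-second delay
        callback({"visit_id": visit_id,
                  "session_id": str(uuid.uuid4()),
                  "status": "SESSION_READY"})

    threading.Thread(target=deliver_session_data, daemon=True).start()
    return ack


# Because the virtual service fabricates the session data, any visit
# ID is valid -- load tests are no longer limited to the handful of
# IDs the partner's live system would accept.
if __name__ == "__main__":
    done = threading.Event()
    print(handle_request("visit-000123",
                         lambda msg: (print(msg), done.set())))
    done.wait(timeout=5)
```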
“That was the biggest constraint of manual stubbing -- it still left us depending on real systems. We needed a one-time virtualization step that could support multiple processes, functional, regression and performance testing,” said SQA Manager.
Company selected Service Virtualization to solve the scheduling issues and cost of attempting to replicate complete test beds or manually stubbing out environments and data. “We did an evaluation of quite a few products that claimed similar functionality, but Service Virtualization offered us the most time-efficient solution and allowed us to get up and running quickly. It’s not the cheapest solution but we needed to be running within weeks,” said SQA Manager.
The company started by training 12 individuals in the IT group. “We were very impressed with our speed of adoption, as we had a sustainable solution and models built for the majority of our dotcom performance and functional testing within 5 weeks, which left us no longer needing to access our partners’ systems. Usually enterprise solutions take a lot longer to implement,” said SQA Manager. “Since then we are pretty much on our own, and we are able to tackle many projects at different stages.”
Here’s how Company used Service Virtualization to accelerate solution delivery:
- Use Service Virtualization instead of maintaining stubs. A complex service could take days for the team to stub, while providing little or no reuse. “For instance our Engineering group has a Programming Guide application,” said SQA Manager. “It has many data parameters -- descriptions, cast, crew, channels, timing, and fees -- all of the listing details that fall under its service. So our plan is to capture any service API that exists today, virtualize that, and then just maintain the virtual service for any changes required for ongoing testing. Then when engineering completes any new service API, we do the first-time testing with the real system, and once we have identified any issues and additional properties, we go ahead and post an updated Virtual Service.”
- Virtualize test data for the most dynamic scenarios first. “We identified some areas of our business, like a movie internet portal that was very critical. We used to store a database of test data in our own system, but it was changing too fast. We wanted to use real-time data, but the availability of that data became a challenge. That’s when we started rolling out Service Virtualization of our existing major services,” SQA Manager said.
“We may change the way we show channels, pay per view and video on demand -- anything that customers want to view or record. So if a program is available at midnight but its listing changes at 6 AM, we need to make the test environment reflect that. That’s the only way we know customers will instantly go to the right channel and be in the right time zone with the correct content.” (A time-aware sketch of this behavior follows this list.)
- Maintain state for complex regression scenarios. Service Virtualization’s ability to maintain context across scenarios was especially critical for the team. “One end-to-end scenario must correlate data across more than 8 systems, and it used to take a lot of coordination and 1-2 weeks of effort,” SQA Manager said.
- Simulate performance characteristics of downstream systems. Service Virtualization can simulate the response times observed in production, or ratchet them up or down depending on the desired performance scenario (see the latency sketch after this list). “We were executing performance testing against some non-native components that were owned by other engineering teams. Getting support from other teams was becoming a pain point, and our job was not to certify their solution, it was to test ours,” said SQA Manager.
- Extend virtualization to support custom objects in the environment. The team created a few custom wrappers during the initial on-site engagement, to prove that Service Virtualization could simulate non-standard integration layers and data source types. This extensibility enables Company to flexibly support more systems as they are changed or integrated.
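As an illustration of the time-sensitive listing scenario above, here is a minimal sketch of a time-aware virtual guide service. The schedule windows, channel numbers, and titles are invented for illustration; they are not the company's actual guide data.

```python
# Hypothetical time-aware virtual "Programming Guide" service: the
# listing returned depends on the request time, so a schedule change
# at 6 AM is reflected without touching live engineering systems.
from datetime import datetime, time as dtime
from typing import Optional

SCHEDULE = [
    # (window start, window end, listing served in that window)
    (dtime(0, 0), dtime(6, 0), {"channel": 501, "title": "Midnight Movie"}),
    (dtime(6, 0), dtime(23, 59), {"channel": 501, "title": "Morning News"}),
]


def guide_listing(now: Optional[datetime] = None) -> dict:
    """Return the listing that should be on air at `now`."""
    t = (now or datetime.now()).time()
    for start, end, listing in SCHEDULE:
        if start <= t < end:
            return listing
    return {"channel": 501, "title": "Off Air"}


print(guide_listing(datetime(2024, 1, 1, 0, 30)))  # pre-6AM listing
print(guide_listing(datetime(2024, 1, 1, 7, 0)))   # post-change listing
```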
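And a companion sketch of the response-time simulation described in the performance bullet above: the virtual service injects latency drawn from a per-scenario band. The latency numbers are placeholders, not measurements from this case study.

```python
# Hypothetical latency shaping for a virtual service: simulate
# production-like response times, or ratchet them up for a
# degraded-performance scenario.
import random
import time

RECORDED_LATENCY_MS = {
    "baseline": (80, 120),    # roughly production-like timing
    "degraded": (400, 900),   # stress the caller's timeout handling
}


def virtual_response(payload: dict, scenario: str = "baseline") -> dict:
    lo, hi = RECORDED_LATENCY_MS[scenario]
    time.sleep(random.uniform(lo, hi) / 1000.0)  # inject response delay
    return {"echo": payload, "scenario": scenario}


print(virtual_response({"channel": 501}))                       # fast path
print(virtual_response({"channel": 501}, scenario="degraded"))  # slow path
```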
“We used to just get the requirements, do functional testing and confirm the functionality. Now we’ve increased the level of dynamic testing at least 200-300 percent, and in the last 2 years the focus has changed; the team is working on structural testing and usability across multiple devices,” SQA Manager said. “This project goes throughout our IT organization. Obviously we can maintain our own environments, but long term we are doing more and more integration projects, and the use of environments must extend through our engineering partners.”
Using Service Virtualization, the extended Company IT team is able to eliminate days of labor from their release cycles, which typically lasted 3 weeks. “It used to be that everyone was building stubs as part of their process when they needed them, and across our Dotcom team that’s a huge number of hours,” said SQA Manager. “Now the entire team of Dev, Quality, and UAT are leveraging Service Virtualization. So while we only have one full-time and one half-time person building environments in the Service Virtualization suite directly, they support everyone much faster, while providing all those scenarios teams need.”
Results using Service Virtualization included:
- Shifted 85-95% of functional and regression testing coverage to Service Virtualization environments. The team went from stubbing only the simplest responses, at 2 days each, to capturing reusable virtualized services for mobile devices, integration, SAP systems, and third-party SaaS services that were formerly impossible to stub. “The amount of time it would take us to build stubs for any medium or high complexity service was very high,” SQA Manager said.
- Reduced test data preparation time by 80% or more. “We always used to have significant lead time, data synchronization and troubleshooting issues with each cycle. It’s not just like you can have a fresh copy of production where all the data is untouched,” said SQA Manager. “Now we’ve saved significant time, as we came up with a set of data with Service Virtualization that we can reuse again and again, with none of the baggage of aligning schedules and readiness of the data for testing.”
- Decreased end-to-end cycle time by 1-5 days per new function, per project cycle (avg. 30% reduction). “Some scenarios are very dynamic. For instance, you have complex scenarios like the cutoff levels for overdue customers, and when to allow or disallow certain additional charges if they are past due by $100 and 10 days, or more than $250 and 15 days. These scenarios weren’t impossible to do before, but the time and coordination effort required was huge: development teams, database administrators, support teams all getting involved for days. Now all of that is removed and we can create the scenarios in our Service Virtualization system,” SQA Manager said. (The cutoff rules are sketched below.)
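A minimal sketch of how past-due cutoff rules like those quoted above might be expressed as data a virtual environment can serve directly, with no DBA or support-team coordination to stage each overdue account. The rule encoding and function names are illustrative assumptions.

```python
# Hypothetical encoding of the past-due cutoff levels quoted above:
# block additional charges at $100 owed for 10+ days, or $250 owed
# for 15+ days.
CUTOFF_RULES = [
    # (minimum balance owed in dollars, minimum days past due)
    (250, 15),
    (100, 10),
]


def charges_allowed(balance_due: float, days_past_due: int) -> bool:
    """Disallow additional charges once any cutoff level is crossed."""
    return not any(balance_due >= b and days_past_due >= d
                   for b, d in CUTOFF_RULES)


assert charges_allowed(50, 30)       # small balance: still allowed
assert not charges_allowed(120, 12)  # past $100 and 10 days: blocked
assert not charges_allowed(300, 16)  # past $250 and 15 days: blocked
```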
The performance testing team intends to bring Service Virtualization even closer to production by utilizing some of Service Virtualization’s abilities for “pass-through” to the live systems for certain test cycles, as well as self-healing the data models from live production data when any unknown item is encountered in a run.
The company will continue to expand the use of Service Virtualization capabilities, both among IT and software engineering groups, and by virtualizing third parties such as credit card bureaus and cooperative partner companies like Telco carriers.
“Service Virtualization can be used by one group, but the usage should span an organization rather quickly. Our engineering partners initially required us to look for additional solutions so we could better meet our timelines. Now I get messages all the time from Operations and development teams that have heard of these results, and they want to see and share this success,” said SQA Manager.
On the Dotcom side of functional testing, the team intends to continually expand the use of Service Virtualization environments to support more engineering and partner development teams, thereby increasing the number and complexity of scenarios covered.
“With this success, we’re really focused on taking it to other applications,” said SQA Manager. “For instance we are building out Training back-end environments for our Customer Service reps, so our support agents won’t need to set up enterprise-level environments any more, or cause system downtime for training.”
Leading Global Telco: Case Study
For a leading global telecommunications firm, the development, support and maintenance of key enterprise applications is a mission-critical aspect of the business -- from network operations, to billing, to customer service. Of the 700+ applications the company maintains, the provisioning of customer orders is among the most critical of these software-enabled processes.
“As a goal, we calculated that a 20% improvement in test environment availability would produce a huge uplift in quality and value for the project, given the amount of resources and teams involved.”
–Lead Project Manager
To accurately bundle and reserve various product and service offerings around a phone number, dozens of dependent transactions and data records must be leveraged from multiple systems. A failure in this provisioning process could rapidly impact thousands of customers, so reliability and high quality is a business imperative.
The company’s provisioning application was experiencing periods of downtime which negatively impacted customers and resulted in millions of dollars in costs per incident. The company and its development and testing partners initiated a project to re-architect the application to make it more stable, less brittle to change, and better able to support expanded use cases.
- Scheduling and test data conflicts for live environments. The company’s live systems such as billing, pricing and records were too critical to allow development and testing teams to have direct access. A lack of available test environments meant that dependent systems were responsible for up to 70% of the team’s downtime during the lifecycle.
- Inability to virtualize systems for the test lab by conventional means. Hardware virtualization techniques were attempted to replicate the distributed services and systems for performance testing purposes. These approaches were ineffective, as the assets were either too large (for instance, multi-terabyte mainframes, CORBA objects, etc.), inaccessible to the team (partner-owned, 3rd-party services, etc.), or incomplete and not yet developed when needed.
- Need to engage Business and QA teams earlier to validate performance at a component level. The team was conducting UI-driven load tests of their applications, but couldn’t provide sufficient detail to resolve underlying performance problems. “When you push testing through a UI, that’s black-box testing,” said the director. “Service Virtualization allows white-box testing, so you can isolate and identify what makes up that transaction, piece by piece. Without that detail, a typical load test might tell me ‘It’s 22 seconds’ -- I know that’s slow, but I don’t know why.” (A timing sketch follows this list.)
- Need to reduce the cost and effort to trace the root causes of defects. Since much of the business logic of the firm’s provisioning system is locked in middle-tier components and services, business and QA teams needed to wait until the end of each software lifecycle, when the completed application UI was available to test. Unfortunately, this proved to be a manual testing process that provided little visibility into errors occurring within the middle tiers.
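A minimal sketch of the white-box timing idea the director describes: wrap each dependency call so a slow end-to-end transaction can be attributed piece by piece, rather than reported as a single 22-second total. Component names and delays here are invented for illustration.

```python
# Hypothetical per-component timing around a provisioning flow:
# with each dependency isolated (or virtualized), the slow piece
# of a slow transaction stands out by name.
import time
from contextlib import contextmanager

timings = {}


@contextmanager
def timed(component):
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[component] = time.perf_counter() - start


def provision_order():
    with timed("pricing"):
        time.sleep(0.1)   # stand-in for the pricing service call
    with timed("billing"):
        time.sleep(1.2)   # the slow dependency shows up here
    with timed("records"):
        time.sleep(0.05)


provision_order()
for component, seconds in sorted(timings.items(), key=lambda kv: -kv[1]):
    print(f"{component}: {seconds:.2f}s")  # billing dominates the total
```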
“As a goal, we calculated that a 20% improvement in test environment availability would produce a huge uplift in quality and value for the project, given the amount of resources and teams involved,” said the lead project manager at the company.
Service Virtualization enabled the development and testing teams to virtualize elements of their complex, multi-tier environment to significantly reduce downtime and lower costs. In particular, Service Virtualization’s ability to virtualize the dynamic behavior and performance of dependent systems enabled the 24/7 availability of realistic, testable systems, and faster development -- at a fraction of the cost and effort of attempting hardware virtualization and physical test lab approaches.
While the practice of service virtualization can be applied for automated functional, integration, performance, and system-wide testing targets, the firm also addressed a critical need: ensuring that defects would be eliminated earlier in the lifecycle.
Service Virtualization was leveraged in a variety of ways:
- Lower cost and effort using Virtual Service Environments instead of physical test beds. Service Virtualization was used to capture and simulate services, data, legacy systems (including mainframes, CORBA and RPC), and remote 3rd party applications to allow 24/7 access to Dev and Test teams across the software lifecycle. In comparison, trying to replicate these IT resources using physical or hardware-based virtualization approaches was recognized as being impractical, extremely time-intensive, and costing millions of dollars.
- Provide end-to-end transparency and traceability to improve productivity. UI and Acceptance testing can be labor-intensive efforts that provide little visibility into the root causes of defects that occur at the component and integration level. Service Virtualization allows non-developers to easily capture and provide a view of root causes to developers, for higher issue acceptance and resolution rates.
- Leverage “Pass-Thru” functionality to maintain realistic scenarios. In rapidly changing environments, ensuring that you have realistic responses for a current testing environment can consume significant resources. With “pass-thru” enabled, if Service Virtualization cannot generate a valid response, one is dynamically captured from the live application environment, refreshing the test scenarios with current behaviors (see the pass-thru sketch after this list).
- Test and collaborate earlier in the lifecycle. As a best practice, Service Virtualization encourages development organizations to open up component- and integration-level testing efforts to earlier collaboration. Service Virtualization’s code-free test assets and defect collaboration capabilities allow these teams to proactively find issues earlier, when they are less costly to fix.
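A minimal sketch of the pass-thru pattern above, assuming a simple keyed lookup; `call_live_system`, the keys, and the response fields are hypothetical stand-ins, not the product's API.

```python
# Hypothetical pass-thru: answer from the recorded model when
# possible; otherwise forward to the live system and learn the new
# response, self-healing the model for the next run.
recorded = {"lookup:555-0100": {"status": "RESERVED"}}


def call_live_system(key):
    """Placeholder for a real call into the live environment."""
    return {"status": "AVAILABLE"}


def virtual_lookup(key):
    if key in recorded:
        return recorded[key]        # serve the virtualized response
    live = call_live_system(key)    # pass through to production
    recorded[key] = live            # refresh the model with live data
    return live


print(virtual_lookup("lookup:555-0100"))  # answered from the model
print(virtual_lookup("lookup:555-0199"))  # captured live, then stored
```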
“Driving significant time-to-market and quality improvements takes more than just software,” said the Professional Services manager for the project. “When dev teams are accountable to QA, they get more engaged in the quality process. In this release cycle, they were able to do more testing per function point, and decrease the defect escape ratio. Now more teams are getting involved in finding problems earlier in the lifecycle.”
“We deployed Service Virtualization immediately, and after 4 months, integration testing has never gone more smoothly,” said the firm’s project manager. “By isolating applications throughout integration testing, we were able to identify defects in a virtual environment, without the noise of the rest of the environment. That was a revelation for the provisioning app team, as they didn’t find any additional application issues after using the VSE to run it first.”
- Reduced development and integration testing cycle time by 40-70% over the first 6 months by eliminating dependencies. By virtualizing constrained components that were unavailable for development and testing, as well as virtualizing components that were not yet built from service definitions, the teams were able to show significant productivity improvements within 30 days with continued incremental gains moving forward.
- Shifted testing 1 to 2 phases earlier in the lifecycle and expanded test responsibilities. Using Service Virtualization, the firm was able to eliminate dependencies, and get business analysts and QA teams involved in finding defects at the Component, Integration and System-Wide level of the application by providing stable virtual models to test against.
- Avoided millions of dollars in test environment costs. Physically provisioning a multi-tier test lab environment containing all relevant dependencies at this company was cost- and time-prohibitive. Using Service Virtualization, it was possible to capture the behaviors and data scenarios for a robust customer application environment within 30 days of project launch.
- Virtual data management enabled 500% more end-to-end test scenarios per cycle. For each 4-week testing cycle, the team would get only 20 valid phone numbers for end-to-end testing, and each end-to-end test consumes one of the numbers. “Once a number was consumed by the provisioning process, the data could not be reused, effectively limiting the team to 20 end-to-end tests per test cycle.” With VSE, this data could be reused an unlimited number of times (sketched after this list).
- Achieved 103% ROI within 4 weeks. The company leveraged Service Virtualization to help accelerate improvements to their quality processes, enabling the project to be completed earlier and at lower costs than expected, while delivering hard-dollar ROI results even on a conservative scale.
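A minimal sketch of why virtual data lifts the 20-number ceiling: against a virtual provisioning service, “consuming” a number only mutates simulated state, which can be reset between cycles at will. The class and numbers are illustrative, not the product's API.

```python
# Hypothetical virtual number pool: provisioning consumes a number
# in simulation only, so the same small data set can back an
# unlimited number of end-to-end test cycles.
class VirtualNumberPool:
    def __init__(self, numbers):
        self.numbers = list(numbers)
        self.consumed = set()

    def provision(self, number):
        """Consume a number, as the real provisioning flow would."""
        if number in self.consumed:
            return False
        self.consumed.add(number)
        return True

    def reset(self):
        self.consumed.clear()  # impossible with live, one-shot data


pool = VirtualNumberPool(["555-0100", "555-0101"])
for cycle in range(3):  # the same 2 numbers back 3 full cycles
    assert all(pool.provision(n) for n in pool.numbers)
    pool.reset()
```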
“Word has gotten around the company that every team will be testing every component ahead of deployment time,” said the team’s project manager. “This has created an incentive for developers to create higher quality code, rather than have their work be visibly rejected before it can be sent over the wall to QA. They now can demo early and fix their own bugs.”
In the next phase of Service Virtualization adoption, the company is expanding the use of Service Virtualization environments to a broader range of IT assets, as well as leveraging Service Virtualization across more critical projects. The initial project objective of increasing quality has carried forward, joined by new expectations to shorten the test cycle and reduce associated costs.
Global Telecommunications Provider: Customer Success Story
Deliver high-performance ecommerce with no defects before critical product launch
“The more projects and teams we apply virtualization and complete destructive testing to, the more confidence we release with.”
–Project Lead, Phone Launch Integration Team
- New mobile device coming out in 3 months, anticipated huge spike in order demand from retail and online channels.
- Need to check inventory, set up numbers and plans, and provision, order, and configure phones -- more than 17 unique systems involved and hundreds of scenarios.
- Very limited window of access to live systems (one week total!) for dev & test.
- Just an 8-week timeline to prove at least a 10x performance increase over current levels across many key systems.
Solution using Service Virtualization
- Captured and created virtual services from all dependent systems in environment.
- Individually validated each service to perform at expected levels (3000+ TPS; see the load sketch after this list).
- Ran massive functional and regression testing earlier in the cycle.
- Found and eliminated 3 show-stopper performance errors in the order, activation and transaction processes prior to launch.
- New virtual infrastructure supported more than a 10x increase in transaction volume testing over currently tested levels.
- Completed implementation in less than 8 weeks -- 60% cycle time reduction in integration & performance testing.
- Reduced environment infrastructure costs by 86%.
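A minimal sketch of driving a single virtual service toward a throughput target like the 3000+ TPS validation cited above; `call_service`, the call count, and the worker pool size are hypothetical placeholders.

```python
# Hypothetical throughput check: fire a fixed number of requests at
# a virtual service through a thread pool and report achieved TPS
# for comparison against a 3000+ TPS target.
import time
from concurrent.futures import ThreadPoolExecutor


def call_service():
    pass  # placeholder for one request to the virtual service


def measure_tps(total_calls=30_000, workers=64):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for _ in range(total_calls):
            pool.submit(call_service)
        # leaving the with-block waits for all submitted calls
    return total_calls / (time.perf_counter() - start)


print(f"achieved ~{measure_tps():.0f} TPS")
```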
Global Telecommunications Corporation: Customer Success Story
- Transformation and modernization of the entire application platform that drives Enterprise Services.
- Agile methodologies are being introduced and embraced for this Program.
- Multiple components being developed in parallel, putting constraints on all teams.
- Performance of platform has to be validated for extremely high loads to meet customer SLAs.
- Current tooling from HP does not meet needs for this composite application platform.
Why Service Virtualization?
- Ability to quickly trace within the platform why something is not working.
- Developers can simulate dependent components that are being developed in parallel to speed up their delivery.
- Service Virtualization platform supports all of their modern technologies including Fusion and TIBCO.
- Ability to evolve performance testing into a continuous part of the dev & quality process.
- Improve efficiencies and reduce cycle time for releases -- up to 30%.
- Reduced need for physical infrastructure.
- Much greater testing coverage and increased level of confidence around meeting performance SLAs.