Service Virtualization 101

It’s true what they used to tell you in school: There are no dumb questions. That’s especially true when learning about new approaches to software development. In that spirit, here are some basic explanations of service virtualization techniques and best practices.

For absolute beginners, here’s an FAQ to serve as a primer to get you going:

What is it?

Service virtualization is the practice of capturing and simulating the behavior, data, and performance characteristics of dependent systems, and then creating virtual copies of those systems. The virtual copies, which behave just as the live systems do, can then be used independently of the actual systems to develop software without constraints. The result: software developed and deployed faster, at lower cost and with higher quality.

Gone are the days of waiting for access to dependent systems, or of deferring testing until the end of development. Testing becomes an ongoing part of development.

Who is it for?

Today, it’s pretty much for every viable company. Do you deliver services to customers over the internet? Then SV is for you. Do you enable sales and services teams through software? Then SV is for you, too. In fact, there are very few businesses today that do not depend on the ability to deliver the very latest software capabilities on an ongoing basis. In such a competitive environment, companies that fall behind are doomed to fail. That’s why the impact of service virtualization — the ability to turn around software iterations faster, better and cheaper — is so profound and far-reaching across industries.

How does it work?

Service virtualization creates an asset known as a virtual service (VS), which is a system-generated software object that contains the instructions for a plausible “conversation” between any two systems.
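To make that concrete, here is a minimal sketch, in Python, of the kind of request/response “conversation” a VS might hold. The endpoint, field names, and data are all invented for illustration; real SV tools use their own richer formats:

```python
# Illustrative only: a virtual service modeled as a recorded "conversation"
# for a hypothetical account-lookup endpoint. All names and values are
# invented; no vendor's actual format is implied.
virtual_service = {
    "endpoint": "/accounts/lookup",
    "protocol": "HTTP/JSON",
    "transactions": [
        {
            "request":  {"method": "GET", "params": {"account_id": "1001"}},
            "response": {"status": 200,
                         "body": {"account_id": "1001", "balance": 250.00}},
        },
        {
            # Negative-path behavior is captured too, so tests can cover it.
            "request":  {"method": "GET", "params": {"account_id": "9999"}},
            "response": {"status": 404,
                         "body": {"error": "account not found"}},
        },
    ],
}
```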

Real software automation must be involved in the capture and modeling of the virtual service. Otherwise we are still talking about the time-intensive, costly “stubs” that developers manually code and maintain on their own.

Let’s say your team is developing an update to a key application that must make requests of a downstream mainframe and a cloud-based partner service (SaaS). Neither of those downstream systems is available for the regression and performance tests you need to run throughout development. So you replace them with virtual services and get to work. Think of the VS as a reliable stand-in for those constrained and costly applications that you don’t want to expose to the daily grind and dangers of being set up, used, and reset for testing and integration by developers.
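In practice the swap can be as simple as pointing the application under test at the virtual-service endpoints instead of the live ones. The sketch below is hypothetical; the URLs and the USE_VIRTUAL_SERVICES flag are placeholders, not any vendor’s actual API:

```python
import os

# Hypothetical endpoint configuration for the application under test.
# In production the app talks to the real mainframe gateway and SaaS API;
# under test it is pointed at virtual services. All URLs are placeholders.
LIVE_ENDPOINTS = {
    "mainframe": "https://mainframe-gateway.internal.example.com",
    "partner_saas": "https://api.partner.example.com",
}

VIRTUAL_ENDPOINTS = {
    "mainframe": "http://localhost:9001",     # VS standing in for the mainframe
    "partner_saas": "http://localhost:9002",  # VS standing in for the SaaS
}

def downstream_endpoints() -> dict:
    """Return virtual endpoints when the test flag is set, live ones otherwise."""
    if os.environ.get("USE_VIRTUAL_SERVICES") == "1":
        return VIRTUAL_ENDPOINTS
    return LIVE_ENDPOINTS
```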

The fundamental process works this way (a simplified code sketch follows the three steps):

  1. Capture: A “listener” is deployed wherever traffic or messages flow between any two systems. Generally, the listener records the traffic exchanged between the current version of the application under development and the downstream system we seek to simulate.
  2. Model: Here the service virtualization solution takes the captured data and correlates it into a VS: a “conversation” of appropriate requests and responses that is plausible enough for use in development and testing. Sophisticated algorithms are employed to do this correctly.
  3. Simulate: The development team can now use the deployed virtual services on demand as stand-ins for the downstream systems. The VS responds to requests with appropriate data just as the real thing would, but with more predictable behavior and much lower setup/teardown cost.
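The toy Python example below compresses all three steps into a few lines. It is purely illustrative: a real SV tool captures traffic with network listeners and applies far more sophisticated correlation, but the shape of the pipeline is the same:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Step 1, capture: in a real tool a listener sitting between the two systems
# would populate this log; here it is hard-coded for illustration.
captured = [
    {"request": "GET /accounts/1001", "response": '{"balance": 250.0}'},
    {"request": "GET /accounts/2002", "response": '{"balance": 75.5}'},
]

# Step 2, model: correlate the captured traffic into a lookup table keyed
# by request. Real SV solutions do far richer correlation than this.
model = {entry["request"]: entry["response"] for entry in captured}

# Step 3, simulate: serve the modeled responses on demand.
class VirtualServiceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        key = f"GET {self.path}"
        body = model.get(key, '{"error": "no recorded response"}')
        self.send_response(200 if key in model else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    # Point the application under test at http://localhost:9001
    HTTPServer(("localhost", 9001), VirtualServiceHandler).serve_forever()
```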

Remember, a VS “simulates” the constrained system for purposes of development and test; it does not “replay” it as a step-by-step sequence, the way you would a recorded video. Sufficient dynamic logic must be captured and modeled into the VS for it to respond intelligently across the variability of needed usage scenarios. The VS should resemble the live system closely enough that upstream applications and test users believe they are interacting with the real thing.
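A toy matcher makes the distinction visible. A replay would return responses in recorded order; a simulation matches on request content and falls back to plausible behavior for inputs it never saw. The data and fallback rules below are invented for illustration:

```python
# "Simulate, don't replay": match on request content rather than sequence.
# Recorded data and fallback rules here are invented for illustration.
recorded = {"1001": {"balance": 250.0}, "2002": {"balance": 75.5}}

def respond(account_id: str) -> dict:
    if account_id in recorded:
        return recorded[account_id]           # exact match on captured data
    if account_id.isdigit():
        return {"balance": 0.0}               # plausible default for unseen IDs
    return {"error": "malformed account id"}  # negative-path behavior

print(respond("1001"))  # -> {'balance': 250.0}   (captured)
print(respond("3003"))  # -> {'balance': 0.0}     (synthesized, never recorded)
```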

How has it improved software development?

The analysts at voke have been studying companies’ implementation of SV, and the results of those implementations, for several years. They have repeatedly found that companies using service virtualization experience lower costs, greater software quality and faster delivery. The latest voke research (get a free copy here), a survey of 500-plus companies, found:

  • Dramatically increased test rates: more than a quarter of companies doubled their test execution rates.
  • Shorter test cycles: more than a third reduced cycle times by at least 50 percent.
  • Fewer defects: nearly half of respondents reduced total defects by more than 40 percent.

Another study, released by Forrester, found that SV approaches are gaining significant momentum. Said report author Diego Lo Giudice: “Companies can easily realize financially quantifiable quick wins: shorter test times, increased productivity, and better production quality. More strategically, companies building and continuously delivering modern applications are increasingly interested in SVT adoption.”

Who are the major vendors of service virtualization technology?

The most prominent vendors of the technology are CA Technologies, HP, IBM and Parasoft. An independent Forrester Wave analysis found CA and IBM leading the pack with what it called “rich, comprehensive, in-depth” capabilities. Both landed in the upper-right portion of Forrester’s matrix, with CA earning the highest score for its latest offering. We recommend you read the whole analysis, which gets far more granular.