The bottom line of voke’s latest report on Service Virtualization

Cutting the time it takes to reproduce a software defect. Reducing the total number of defects. Quicker times to test software and get the stuff out the door.

These are just a few of the benefits that a recent report from voke media found for companies that use service virtualization, or the use of automation and simulation in software testing. (Click here to watch a recent webcast about the study.)

And yet voke’s 2014 survey of 505 people at both tech and non-tech companies unearthed a peculiar issue: Even though more companies are using service virtualization, the technology has yet to achieve critical mass in the marketplace, according to Theresa Lanowitz, voke’s founder.

“Service virtualization should be a cornerstone of testing automation to remove the barriers to releasing software faster and with greater quality,” said Scott Edwards, director of product marketing at CA Technologies (disclaimer: CA sponsors this blog).

“Service virtualization is essential technology with strong and proven returns on investment to deliver software that drives optimal business outcomes and removes constraints throughout,” Edwards said.

In this installment of my two-part series on the voke report, I will look at some of the internal obstacles that survey respondents reported in getting their organizations onboard with service virtualization.

I will also detail some of Lanowitz’s and Edwards’s recommendations for how fans of the technology can move their employers to embrace service virtualization.

So, what’s the holdup?

The voke study found that 69 percent of service virtualization solutions were being used at the project or departmental level. But at the enterprise level, that number shrank to 19 percent, the study found.

Lanowitz’s research uncovered a half-dozen primary reasons why companies were failing to adopt the technology at the enterprise level:

  • Key stakeholders don’t understand the value proposition of service virtualization;
  • An unwillingness in the testing area to expand beyond stubbing and mocking;
  • Lack of funding, especially when budgetary silos exist;
  • Limited resources for training;
  • The lack of an executive champion for the technology’s benefits, such as increasing collaboration and productivity;
  • Lack of education about what the technology is, why it’s important, and how it can help.

What about mocking and stubbing?

A common barrier to organizational adoption of service virtualization is confusion between that testing approach, on the one hand, and the use of mocks and stubs on the other, the voke report says.

In the simplest terms, mocks are the software-testing equivalent of crash test dummies. They are essentially chunks of code that a developer writes to test how another piece of code behaves under certain circumstances.

Stubs are similar, although there is an ongoing online debate about the definitions of the two terms. (Here is how software sage Martin Fowler differentiates between the two.)
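To make the distinction concrete, here is a minimal sketch in Python using the standard library’s unittest.mock module. The `payment_gateway` dependency and its `charge` method are hypothetical names invented for illustration; the point is that a stub supplies a canned answer, while a mock additionally lets the test verify how it was called:

```python
from unittest.mock import Mock

# Stub-style use: a canned answer stands in for a dependency
# (say, a payment service) the test can't or shouldn't reach.
payment_gateway = Mock()
payment_gateway.charge.return_value = {"status": "approved"}

result = payment_gateway.charge(amount=25.00)
assert result["status"] == "approved"

# Mock-style use: verify the interaction itself, not just the answer.
payment_gateway.charge.assert_called_once_with(amount=25.00)
```

Either way, the real payment service never runs, which is exactly the limitation the voke report flags below.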

In any event, the voke report points out that stubbing and mocking can create quality problems in the underlying software. The reason: Stubs and mocks may not be an accurate reflection of the complete or final behavior of the dependent resource. That, in turn, can create a cascading set of problems, from introducing unnecessary and undetectable defects to functionality failures and increased costs to rework the code, the voke report says.

“From a testing perspective, stubs and mocks make the test suite ignore unavailable components,” the report states. “In doing so, vital components are left out of testing and may not be tested in the aggregate until a final end-to-end test prior to going live. And in the worst case, these components may not be tested at all prior to production.”

Service virtualization, on the other hand, represents realistic behavior (“livelike” is the term of art), allows for sharing across multiple teams in the software supply chain, allows modifications to create different conditions and behaviors, and represents “composite behavior and maintains statefulness,” the report asserts.
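A rough sketch of what “maintains statefulness” means in practice, using a hypothetical inventory service invented for illustration (real service virtualization tools record and replay live traffic rather than hand-written classes): unlike a stateless stub that returns the same canned answer every time, a virtual service can remember earlier interactions and respond accordingly.

```python
class VirtualInventoryService:
    """Simulates a dependent inventory system -- not the real one."""

    def __init__(self, stock):
        self.stock = dict(stock)  # state carried across calls

    def reserve(self, item, qty):
        # Behavior depends on what earlier calls did to the state.
        if self.stock.get(item, 0) < qty:
            return {"status": "rejected", "reason": "insufficient stock"}
        self.stock[item] -= qty
        return {"status": "reserved", "remaining": self.stock[item]}


svc = VirtualInventoryService({"widget": 3})
first = svc.reserve("widget", 2)   # succeeds: 3 in stock
second = svc.reserve("widget", 2)  # fails: only 1 left after first call
assert first["status"] == "reserved"
assert second["status"] == "rejected"
```

A stateless stub would have approved both calls, masking the very defect an end-to-end test is supposed to catch.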

Bringing SV to your organization

Lanowitz’s report details six initial steps that companies can take to begin the process of embracing service virtualization:

  • Requiring virtualized assets when source code is checked in by development;
  • Virtualizing third party elements;
  • Virtualizing any part of the software supply chain that has fee-based access;
  • Virtualizing assets that are part of an organization’s core technology;
  • Virtualizing re-useable assets;
  • Requiring the use of service virtualization throughout the software supply chain by all participants.

“Ways to get started include working on small projects or certain types of assets,” she said.

Edwards added that an old adage applies: you eat an elephant one bite at a time.

“Find something of strategic importance and virtualize one aspect of it,” he said. “When you do that, you have (return on investment) numbers that you can showcase to management. Start small and go big. That’s what’s been successful.”