Why Stubbing’s ‘Canned Answers’ Are No Longer Good Enough

One of the best ways to explain the advantages of Service Virtualization to newcomers is to compare it with the age-old practice of manual stubbing. With stubbing, a piece of code is written to stand in for another program's functionality; it is supposed to simulate that program's behavior.
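To make that concrete, here is a minimal sketch of what a hand-rolled stub might look like in Java. The InventoryService interface, the SKU, and the stock value are all hypothetical, invented purely for illustration:

```java
// A hypothetical dependency we don't want to call for real during development.
interface InventoryService {
    int stockLevel(String sku);
}

// A hand-rolled stub: it stands in for the real service, but it only
// "simulates" the one canned answer its author programmed in.
class InventoryServiceStub implements InventoryService {
    @Override
    public int stockLevel(String sku) {
        if ("SKU-001".equals(sku)) {
            return 42; // the single expected ("happy") response
        }
        // Anything outside the programmed-in scenario is simply not handled.
        throw new IllegalStateException("Stub has no answer for: " + sku);
    }
}
```

The stub keeps the calling code moving, but any request its author didn't anticipate simply isn't handled, which is exactly the limitation described next.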

However, as John Michelsen and Jason English wrote in Service Virtualization: Reality is Overrated, stubbing is a poor simulator for every scenario except the one it was specifically written to address. The stub used in development can't be reused by testers, and it fails to account for anything other than a “happy” scenario. Moreover, it is created manually, by humans, and is therefore subject to human error.

As software pundit Martin Fowler wrote in 2007, “stubs provide canned answers to calls made during the test, usually not responding to anything outside what’s programmed in for the test.”

That inherent limitation is what Service Virtualization was designed to eradicate.

Here’s an excerpt from the Michelsen/English book that elaborates:

Automation Eliminates Manual Stubbing and Maintenance

Before Service Virtualization, if we were developing a web UI and didn’t want to wait around, we would build a stub to generate a couple of expected responses from the next layer down (i.e., the web service). Then the web service developers might stub out their underlying ESB layers, or try to mock up some of the user requests from the web UI, and so on.
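To picture that practice, the sketch below shows roughly what such a throwaway web-service stub might look like, using the JDK's built-in com.sun.net.httpserver package. The port, endpoint path, and JSON payload are invented for illustration:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// A throwaway stub for the "next layer down": the web service the UI calls.
// It serves a couple of hard-coded responses so UI work can continue.
public class AccountServiceStub {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8081), 0);
        server.createContext("/accounts/12345", exchange -> {
            byte[] body = "{\"accountId\":\"12345\",\"status\":\"ACTIVE\"}"
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        // Every other request gets the same canned 404 -- no real behavior.
        server.createContext("/", exchange ->
                exchange.sendResponseHeaders(404, -1));
        server.start();
        System.out.println("Stubbed account service listening on :8081");
    }
}
```

Every answer is hard-coded; the moment the UI team needs a scenario the stub's author didn't anticipate, someone has to go back and extend it by hand.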

Unfortunately, this is a manual process that is never sufficient to encapsulate the many types of connections and data that exist within enterprise software architectures. Just keeping up with the variability and constant changes of other systems becomes a never-ending process in itself. In addition, the stubbing of those underlying layers may be completely stalled if the UIs or downstream systems aren’t yet coded.

The before-and-after effect of automated virtual service capture is shown in the graphic above. A Service Virtualization solution automatically builds virtual services from observed live messages and transactions, system logs, and definition documents, allowing development and test to proceed without wasting time waiting on dependencies, or building and maintaining inadequate stubs.

The Critical Capability Is Automation

The critical capability here is the automation of virtual service creation and data maintenance. This automation happens first during the process of listening to messages and capturing the virtual service from live traffic, or generating the virtual service from a design document or transaction log. The initial creation should require very little intervention and time on the part of developers. As a rule of thumb, the resulting services should be, on average, 90–95 percent complete for the scenarios needed by the team; otherwise, the solution by definition is not automated.

Usually, letting observation run on a fairly active message stream for a few minutes, or around 1,000 transactions, allows patterns to be recognized and provides plenty of data to populate a virtual service. Of course, teams may manually model or tweak the virtual service to add scenarios that couldn’t be gathered in the automated capture process, such as very rare edge scenarios and nonfunctional use cases.
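The capture step described here can be pictured as a record-and-replay proxy. The sketch below is a drastically simplified illustration of that idea, not any vendor's actual implementation; the class name, the exact-match keying scheme, and the mode switch are all invented:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Drastically simplified illustration of capture-and-playback: in "listen"
// mode, requests pass through to the live system and the observed responses
// are recorded; in "playback" mode, the recordings answer instead.
public class CapturingVirtualService {
    private final Map<String, String> recordings = new ConcurrentHashMap<>();
    private final Function<String, String> liveSystem; // the real dependency
    private boolean playback = false;

    public CapturingVirtualService(Function<String, String> liveSystem) {
        this.liveSystem = liveSystem;
    }

    public void switchToPlayback() {
        playback = true; // stop observing; answer purely from recordings
    }

    public String handle(String request) {
        if (playback) {
            // A real solution would also recognize patterns across the
            // ~1,000 observed transactions; here we just key on the request.
            return recordings.getOrDefault(request, "<no recording>");
        }
        String response = liveSystem.apply(request);
        recordings.put(request, response); // capture while listening
        return response;
    }
}
```

In this toy version the “recognized patterns” are just exact request matches; a team could then hand-edit the recordings to add the rare edge scenarios and nonfunctional cases mentioned above.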

It’s important to recognize that even when we see significant effort put into a stub or mock by the development team, the test team cannot reuse it, because it is ineffective for anything but development’s limited set of use cases. This is, in fact, one of the greatest issues with stubbing: it intentionally creates an unrealistic environment for the development team, one that is only sorted out when the QA group runs the code against real system behaviors. Only then do we discover how unrealistic the stubs were. Practically every customer we know has the sense that their projects’ defects are discovered too late. Leveraging SV in both development and QA moves that discovery earlier.