In software, unlike in Las Vegas, you really can beat the house. Here’s how.


Everybody loves Las Vegas, a city that crackles with life, music, glitz and glamour. And, of course, there’s gambling, the foundation upon which all that fun was built. As everybody knows, only fools go to Vegas expecting to win. You expect to lose and hope to win. It’s part of the fun; eventually, the house always comes out on top. And, as the ad campaign famously said, what happens in Vegas stays in Vegas. Or should, anyway.

Unfortunately, too many C-level executives today run their companies the way they play craps or roulette. They ration their technology resources, particularly for software testing, on a hit-or-miss basis, leaving themselves, their customers and their shareholders severely vulnerable to problems that can cost them everything.

You call that risk-based testing?

Many companies think they’re engaged in risk-based testing. The appeal is obvious. They don’t have the time or resources to run every possible test, so they set out to execute only those tests that are most likely to help avoid the most damaging defects. Perhaps their team sets out writing mocks and stubs to simulate scenarios, but because the team is only so large, it can cover only so many bets. What about all the other possibilities, imagined or unimagined?

In addition, the stubs created during development can't be reused by testers, and they account only for a "happy" scenario in which every other dependency behaves normally. On top of that, stubs are written manually, by humans, and are therefore subject to human error. Even when the development team has invested significant effort in a stub or mock, the test team can't use it, because it covers nothing beyond development's narrow set of use cases. This is, in fact, one of the greatest problems with stubbing: it deliberately creates an unrealistic environment for the development team, and that gap surfaces only when the QA group runs the code against real system behaviors. Only then do we discover how unrealistic the stubs were, which is why so many products fail today.
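To make the "happy path" problem concrete, here is a minimal, purely illustrative sketch. The names (`PaymentGatewayStub`, `checkout`) are hypothetical, not from any real library; the point is that a hand-written stub can only ever return the one response its author anticipated, so the failure branches of the calling code are never exercised.

```python
# Hypothetical example: a hand-written "happy path" stub for a payment
# service dependency. All names are illustrative.

class PaymentGatewayStub:
    """Returns the one response the developer expected -- nothing else."""

    def charge(self, account_id, amount):
        # The stub always succeeds, so code paths for declines,
        # timeouts, and malformed responses are never exercised.
        return {"status": "approved", "account": account_id, "amount": amount}


def checkout(gateway, account_id, amount):
    response = gateway.charge(account_id, amount)
    # This branch is the only one the stub can ever reach.
    if response["status"] == "approved":
        return "order placed"
    # Against the real gateway, declines and errors land here --
    # behavior the stub never lets anyone test.
    return "payment failed"


print(checkout(PaymentGatewayStub(), "acct-42", 19.99))  # prints "order placed"
```

Every test run against this stub passes, regardless of how the code would behave when the real gateway declines a card or times out.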

Another problem with risk-based testing arises when risk metrics are assigned arbitrarily, based on assumptions about how a poorly understood system should work. In that case, "risk-based" testing risks having precisely the opposite effect: all you are doing is reducing the amount of testing and leaving critical functionality exposed to defects. Effectively, you've become one of those Vegas gamblers convinced they have the perfect system to beat the house. Spoiler alert: They always go home broke.

For testing to become truly “risk-based” today, tests must be rigorous, measurable and generated on the basis of an informed understanding of how a system is most likely to be used, and they must account for an unlimited number of potential outcomes. In Vegas terms, you need a testing system that covers nearly every bet. That’s where Service Virtualization comes in: It puts the odds in your favor by allowing you to cover a whole lot more bets.

How does it work? Check out a free event to find out

How? SV captures live data from the systems your app-in-development needs in order to function properly. Your dev and test teams then have reliable information that can be used to provision data for tests and to create virtual services that respond realistically across a virtually unlimited range of use cases.

To fully understand the implications, consider your company's need to test new apps against the myriad APIs they will encounter in the real world. APIs, essentially virtual handshakes between enterprise apps, present particular testing challenges. Testers need a systematic way to generate tests that are rigorous yet quick to execute. Those tests require accurate test data and production-like environments, both of which might be unavailable, or available only at a cost, during development. All of this must be done while dealing with multiple versions, orders and assemblies, and must be updated whenever the API is updated.
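The core idea behind service virtualization is record and replay. The sketch below is an assumption-laden toy, not any vendor's actual API: a proxy captures responses from the live dependency once, and a "virtual service" then replays them so tests can run without the real system being available.

```python
# Illustrative record/replay sketch of service virtualization.
# None of these classes come from a real SV product.

class RecordingProxy:
    """Sits between the app and the live dependency, capturing traffic."""

    def __init__(self, live_service):
        self.live_service = live_service
        self.recordings = {}

    def call(self, endpoint, params):
        key = (endpoint, tuple(sorted(params.items())))
        response = self.live_service.call(endpoint, params)
        self.recordings[key] = response  # capture live data
        return response


class VirtualService:
    """Replays captured traffic, standing in for the real dependency."""

    def __init__(self, recordings):
        self.recordings = recordings

    def call(self, endpoint, params):
        key = (endpoint, tuple(sorted(params.items())))
        if key in self.recordings:
            return self.recordings[key]
        return {"status": 404, "error": "no recording for this request"}


class FakeLiveService:
    """Stands in for the real backend in this self-contained sketch."""

    def call(self, endpoint, params):
        return {"status": 200, "body": f"{endpoint}:{params.get('id')}"}


proxy = RecordingProxy(FakeLiveService())
proxy.call("/customers", {"id": "7"})           # recorded during capture

virtual = VirtualService(proxy.recordings)
print(virtual.call("/customers", {"id": "7"}))  # replayed, no live system
```

Unlike a hand-written stub, the replayed responses are grounded in real system behavior, and the same recordings can serve both developers and testers.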

To demonstrate the value of such an approach, TechWell is teaming with CA Technologies to sponsor a free virtual conference on Oct. 6. The session streams live from the STARWest Conference at 11:15 a.m. Pacific time; see the event page for details.

The session will demonstrate:

  • A structured, model-based approach to exhaustively testing individual APIs, then chaining those tests together to form highly complex end-to-end tests.
  • How API testing can be rigorous, proportional and realistic, generating accurate API tests based on empirically defined risk thresholds.
  • How dependency mapping can be used to update tests, data and environments consistently whenever any API in the chain changes.

The interactive session will also touch on how developments in artificial intelligence can be leveraged to iteratively create and execute tests on-the-fly, as user behavior or environments change.

The bottom line is that you can't simply "test" the APIs that may determine success or failure when your new app goes live, any more than you can walk into a casino and place a sure bet.