Happy Halloween, everybody! We hope you’re not hiding any skeletons in your software closet. You know, goblins such as untested apps (gasp!), unrevealed glitches (eek!), or tests you’re planning to perform in production environments (boo!).
If you’re a CIO hiding any of the above, a new webinar from CA Technologies might help exorcise your demons.
The webinar, “Tales in Testing: Paranormal Service Virtualization Use Cases,” is a lively, hourlong primer on how Service Virtualization is helping companies reduce costs and development time while increasing service quality.
Event coordinator Alan Baptista, a product marketing manager for CA Service Virtualization, interviews a panel of experts who offer their “tales of strange things,” problems that clients have solved with SV.
As Baptista explains, more than 80 percent of teams experience delays in development and quality assurance due to dependencies they can’t access for various reasons. Impediments to development can include system constraints (e.g., limited access to production environments or mainframes), data constraints (inability to access the right data set when it’s needed) or cost constraints (third-party charges to access systems and data).
According to research CA has conducted, more than half (56 percent) of crucial dependencies are unavailable when dev/test teams need them, resulting in prohibitive restrictions, time limits or added costs.
“There always are impediments, and things that keep us from doing what we need to do,” Baptista says. “These are the nightmares that keep us up at night.”
That’s where Service Virtualization comes in.
What does SV do?
The technology involves modeling the virtual service process and imaging the behavior of the software service. By using SV, you can create a “stand-in” service – identical to the actual service – that can be used from the outset of development all the way through testing.
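To make the “stand-in” idea concrete, here is a minimal sketch in Python (all names are hypothetical, not CA’s API): a virtual service answers requests from a table of recorded behavior, so development and testing can proceed before the real dependency is available.

```python
# Hypothetical sketch of a "stand-in" service: it mirrors the real
# service's request/response behavior from a table of recorded pairs.

class VirtualService:
    """Answers requests from recorded request/response pairs."""

    def __init__(self, recorded_responses):
        self._responses = dict(recorded_responses)

    def handle(self, request):
        # Fall back to a service-style error for requests never recorded.
        return self._responses.get(
            request, {"status": 404, "body": "unknown request"}
        )

# The stand-in replaces, say, an unavailable pricing mainframe:
pricing_stub = VirtualService({
    "GET /price/1234": {"status": 200, "body": "19.99"},
})

print(pricing_stub.handle("GET /price/1234")["body"])  # 19.99
```

From the caller’s point of view, the stand-in is indistinguishable from the real service for every request it has recorded behavior for.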
“It’s actually just replicating the behavior, the requests and responses in the system, and what you’re trying to do,” Baptista says.
The process works by recording traffic between existing systems, or by drawing on other sources such as log files, sample data or packet capture. That information is then processed and evaluated to create a “livelike model.” The third stage, Baptista says, is the creation of a “living, breathing model” of the system in question, with sophisticated, contextual behavior and automatic handling of dynamic properties.
If you’re used to testing with manually created mocks and stubs, think of SV as a far more advanced, automated and dynamic approach to the same concept. While mocks and stubs are restricted to an individual unit test, SV involves “really allowing for shared resources and comprehensive reproduction of what people are trying to test up against” across the SDLC, Baptista says.
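The contrast is easy to see in code. Below is a hedged sketch (hypothetical names, standard library only): a classic stub lives and dies inside one unit test, while a virtual service runs as its own small HTTP process that any test suite or team can point at.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# --- classic stub: wired into a single test, invisible elsewhere ----
def test_discount_with_stub():
    def stub_get_price(_sku):          # hard-coded stand-in
        return 100.0
    assert stub_get_price("ABC") * 0.9 == 90.0

test_discount_with_stub()

# --- virtual service: one shared process, many consumers ------------
class PriceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"path": self.path, "price": 100.0}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):      # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), PriceHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

reply = json.loads(urlopen(f"http://127.0.0.1:{port}/price/ABC").read())
print(reply["price"])  # 100.0
server.shutdown()
```

The stub helps exactly one test; the HTTP stand-in behaves like a deployed dependency, which is what lets it be shared across teams and across the whole lifecycle.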
The webinar offers detailed information from a number of use cases, including:
Auto sales site

Baptista offered the example of how Service Virtualization was used by the auto sales site Autotrader. The company avoided $300,000 in test hardware and software costs and reduced integration time from three days to three hours by creating what one IT executive calls “chaos in a box.”
That is, the company used SV to create disaster scenarios so it would be effectively ready for any eventuality. “Ultimately they decreased their software-defect hours by 25 percent,” Baptista says.
Large shipping and logistics firm
Nathan Devoll, a digital presales team lead at CA, tells of a large company whose service involves picking up and delivering packages. It was already using Service Virtualization to replicate mainframe service calls and integrate services with partner firms.
However, the company found a unique challenge for SV: It needed to integrate a new app that helps collect and transmit customer signatures in the form of PDF files.
“There’s no easy way to replicate that type of application,” Devoll says. “What they were able to do was create virtual services that acted as a signature pad.”
Using the virtual services, the company could run thousands of tests instead of using developers to pound away on signature pads.
“That was a really interesting way to leverage Service Virtualization,” Devoll says.
Large utility company
Rick Bansal, a CA product manager, tells of a “scary” call he got from the CIO of a large utility who needed to test a new application that interacts with millions of smart meters. This CIO needed to be sure the app could handle the capacity of all those meters in the event of a crisis or mass outage.
“In a matter of a couple of hours, we turned around and built a testing harness that let them emulate the behavior of literally millions of smart meters on a laptop,” Bansal says.
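How can a laptop stand in for millions of devices? One plausible sketch of the trick (hypothetical code, not CA’s implementation): rather than instantiating each meter, the harness synthesizes any meter’s response on demand from its ID, so “millions of meters” cost almost no memory.

```python
import random

def virtual_meter_reading(meter_id, hour):
    # Deterministic per-meter behavior: seeding by (meter_id, hour)
    # means the same meter always answers the same way, like a
    # recorded physical device would.
    rng = random.Random(meter_id * 24 + hour)
    return {"meter": meter_id, "hour": hour,
            "kwh": round(rng.uniform(0.1, 3.0), 2)}

# The app under test can poll any of "millions of meters" without any
# of them existing as real processes:
sample = [virtual_meter_reading(m, hour=18)
          for m in range(2_000_000, 2_000_003)]
print(len(sample))  # 3
```

Because each response is computed only when the application asks for it, scaling from three meters to three million is a change of loop bounds, not of hardware.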
The virtual system was so realistic that this CIO implemented it during a live performance test of the system and nobody even noticed, Bansal says.
“The folks that were doing the testing were just amazed,” he says. “They had no idea they were interacting with virtual services.”
Not only did the virtual service emulate the live system, but it allowed the company to test against a number of different disaster scenarios.
“If you asked what were the cost savings … imagine $200 or $300 per smart meter and do the math for 2 or 3 million smart meters they needed to emulate,” Bansal says. “They’ll never tell you they got that kind of benefit from it, but you can do the math yourself. Needless to say, that deal closed very rapidly.”
When it comes to developing and testing software better, cheaper and faster, SV is no trick. It’s a treat.
For a lot more detail, tap into the free webinar. It’s well worth your time.